TheMrIron2's blog
Welcome to the personal blog of TheMrIron2
    TheMrIron2
    A movement called Operation Rainfall was responsible for the worldwide release of three Japanese games: the revered Xenoblade Chronicles, the sleeper hit Pandora's Tower and The Last Story. Often overlooked as Xenoblade's little brother, The Last Story is an action RPG by the creator of the Final Fantasy series, made alongside the composer of FF1. Don't let the "RPG" in the genre fool you; this is no traditional RPG. Combat is an intuitive real-time system with cover mechanics, ducking and rolling, and none of the turn-based mechanics of Final Fantasy games prior to FF12. The game revolves around the protagonist Zael, who, alongside his band of mercenaries, quickly becomes embroiled in a problem much bigger than monster cleanup - saving the land, to be precise.


    STORY
    The title tells you that this game has its story at the very core of the game. Zael, Dagran, Syrenne, Mirania, Lowell and Yurick compose the 6-man squad that stays with you throughout the plot. While clearing out the opening dungeon, Syrenne gets critically wounded. Zael, with a mix of anger and sorrow, has an outburst and cries out about losing the people close to him - and his outburst is heard by a mysterious power, which grants him "The Outsider". It transpires that Zael is, deep down, full of sorrow after a tragic childhood - just like this mysterious power - and going forward, he can use the power of The Outsider to divert enemy attention and fire towards himself to assist his teammates. This is one of the main mechanics of the game and while you're using this power - "Gathering", specifically - your teammates' magic also conjures twice as fast. It's very beneficial, but you can get in real trouble if you become surrounded, and balancing its use is key.
    Just under an hour in, you are introduced to a new, very important character: Lisa. Lisa has run away from home - which turns out to be a castle, where her noble family keeps her inside all day, much to her frustration. Her initial naivety is perhaps a bit cliché (a criticism you could extend to most of the ragtag team, with the amusing exception of Syrenne), but it is nonetheless very satisfying to see two innocent, pure-of-heart characters interact. This sort of interaction is, in itself, a nice break from the platitudes of most similar games, which tend to swerve awkwardly around the concept of romance until the end. It often results in heartfelt scenes where you feel a genuine emotional connection with Zael and Lisa - again, a nice change of pace from either awkward avoidance or forced romance.

    This engaging "connection" extends to the rest of the cast, too, though not romantically (unless you're one to fantasise). I was going to single out one of the characters for regularly dropping cutting jokes and remarks, but really, they all have their moments, and the oscillating relationships between characters result in surprisingly funny exchanges of cheeky jabs and natural banter - conveyed through what is also surprisingly good British voice acting. It gives the game a bit of character compared to the usual Americanised casts, and while there will always be those unsatisfied with anything but the original Japanese audio, the voice work was generally praised and the characters were often highlighted as a key part of the game's personality. Zael's occasionally childlike innocence, Dagran's cool and collected leadership, Syrenne's alcoholic antics, Yurick's strong and silent nature with occasional wisecracks, Lowell's smartass comments and flirting and Mirania's special relationship with nature all come together to give the cast moments of bona fide hilarity. In short, The Last Story's assorted team provide genuinely funny moments with their unique chemistry, and the story - while perhaps hinging on a few unoriginal tropes from time to time - does a good job of driving the player forward.

    Gameplay

    The gameplay in The Last Story is quite unique. It blends a Gears of War-style cover mechanic - which rewards leaping out at unsuspecting enemies with surprise attacks - with gameplay straight out of a sword-fighting action game, with almost none of the superfluous RPG features, and it is definitely more frenetic than traditional turn-based RPGs. As the game progresses, you gain the ability to stop combat and issue orders to your crew, be it offensive or support magic from Yurick, Lowell or Mirania, or charging into action with Dagran and Syrenne. The cast varies, and the special abilities developed later on do add some satisfaction to using Command Mode to activate them. By default, your character attacks automatically once you walk up to an enemy, and while this takes some getting used to, it becomes second nature after a little while - and the game still gives you the option to attack manually in any case.


    The game is quite easy, to be frank. You have five lives before a game over, and both your lives and your HP refill automatically at the end of a battle; the game rarely forces you down to your last one or two. There are certainly flashes of challenge when you are forced to use a smaller squad of maybe two or three people, but for the most part, you will rarely see a game over screen. That doesn't mean it's unenjoyable, though; the game is often very satisfying. On a related note, the game is very linear, and if massive worlds with complete freedom are a big deal for you, then you won't appreciate The Last Story. The environments are well crafted, but often you are just travelling from area to area within Lazulis town and castle. You only go outside the town for a total of a few hours, on a ship and then on the Gurak homeland. It's not a bad linear structure, but it is definitely linear. The sidequests are usually quite barebones, though there are occasional optional chapters which are worth playing. If linearity bothers you, it will probably be a recurring criticism, but it is worth stressing that the locations you travel through are tied to the plot and never feel like places you shouldn't be.

    Design

    The visual direction of The Last Story is well thought out and fits the story the game is trying to tell. If the colour and lighting were slightly different, Lazulis Island would be beautiful, but the island is caked in a burning glare which, while offering its own aesthetic, reinforces the fact that Lazulis is no haven. While the game gives you a lot of control over your clothes as you advance, the mercenary band's clothing is appropriately unimpressive. Dagran and Syrenne don tattered hunter suits that look like ripped bibs, Lowell's most defining piece of clothing is his scarf, Yurick wears a short blue jacket with a plain undercoat, and Zael and Mirania wear fairly insipid all-black outfits with dashes of gold. It is perfectly representative of their social status; they are down in the lower ranks of society, working as frowned-upon mercenaries to get by. When they land guard duty in Lazulis castle in the opening hour or two, they stand out as lower class than the knights. Needless to say, the game looks very good for Wii - and while there are moments of slowdown, performance is usually fine. Textures are a mixed bag, with some being very high resolution (one sky, from memory, was a combination of two 1024x1024 textures, the highest resolution the Wii supports) and some simply muddy looking, high resolution or not. The game received some flak on release for its visuals and performance, but it did its best to get the most out of the Wii - aging hardware at the console's launch, let alone in 2011. Dolphin allows the game to truly shine, but it is perfectly fine enjoyed on a CRT TV as well.
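    To put those texture numbers in perspective, here is a rough back-of-the-envelope sketch (my own illustration, not from the review; it assumes uncompressed 32-bit RGBA, whereas real Wii games typically lean on compressed texture formats):

```python
# Memory cost of an uncompressed texture (assumption: 32-bit RGBA;
# actual Wii titles usually use compressed formats to save space).
def texture_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    return width * height * bytes_per_pixel

# Two 1024x1024 sky textures, as mentioned above:
sky = 2 * texture_bytes(1024, 1024)
print(sky // (1024 * 1024), "MiB")  # a big ask of the Wii's roughly 88MB of RAM
```

    Even uncompressed, those two sky textures alone would total 8 MiB, which is why high-resolution textures were a luxury on the hardware.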

    This is drifting into story territory, but it is important to mention: Dagran swears he will make knights of his team, and Zael truly believes him. As events unfold, Zael and co. win noble Count Arganan's favour and rise up the ranks, but the game makes a point: even by accepting the system and doing as you are told, you are feeding the system and validating it. Zael and his team are indirectly causing misery for the lower class he came from. The tragedy is that Zael is too young and inexperienced to realise this; he does what he has to in order to rise up the ranks and make a living. But as his power, The Outsider, makes clear with its growing intensity, he cannot continue to rise inside the system while being the "Outsider" and retaining his moral integrity. The plot, at least for almost all of the game, makes an effort to form a convincing argument against the power system, corruption, greed-driven conflict and divisions in society and politics. The island deteriorates due to human intervention, and the conflict alters the game's design radically and irreversibly: food and other goods fluctuate in value, allowing cunning players to profit from necessity in a turbulent war economy; familiar environments are scarred by war; and even characters evolve throughout the conflict.

    The game, save for the final few hours, is meticulously designed, and what it does offer is a solidly designed package. However, some aspects fall short. Sidequests, save for the few side-chapters, are uninteresting and unrewarding; scripted events sometimes need to trigger to allow you to progress, occasionally without any indication of how to trigger them; and, as I have hinted at, the game's ending seems to defeat what the game stood for up to that point. It's these few problems that prevent The Last Story from being a truly special game. For me, they were made all the more frustrating because I was loving this game and could not get enough of it; I wanted to overlook these flaws, but my second playthrough only made me pay more attention to them. The game is only 20-25 hours long for an average playthrough, which is a nice change from traditional RPGs that demanded a commitment closer to 200 hours than 20, though some diehard RPG fans might be disappointed by the length. There is at least additional replay value to be gained through the online mode, which offers co-op and versus multiplayer for up to 6 players - and while this still works thanks to Wiimmfi and other revived services, you'll be hard pressed to find many players, as I found out for myself. (I'm working on it!)


    CONCLUSION:

    STORY - 8/10:
    Potential to be truly special, but the game scraps everything it stands for in order to meet the status quo and deliver a satisfying ending
    GAMEPLAY - 9.5/10: Exceptional - very enjoyable, unique and a refreshing change of pace
    DESIGN - 9.5/10: A game helmed by the creator of Final Fantasy could hardly fail to deliver in terms of design, and it is one of Sakaguchi's most intricate works yet, despite its short length and strict linearity

    OVERALL: 9/10 - Amazing

    The Last Story has its fair share of problems and unfortunately, these problems are big enough to warrant a meaningful deduction from the final score. However, if you are willing to overlook these flaws, what you will find beneath the scratched surface is a special game: one whose rewarding gameplay and engaging character bonds come together, even with its problems, into a great experience. If you have forgotten about your Wii or your Wii U, or even have the game in your backlog backed up for Dolphin, it is worth dusting off your old system for one Last Story.
    TheMrIron2
    To clarify, my intention here is not to write a review of this game. This is a summary of the game, if anything. I've looked all over Discord for players to play this game online, to no avail - and this game is an absolute gem for anyone interested in action games, RPGs or story-driven games. The game's 6-player versus and co-op functionality both work online via modern-day Wiimmfi; however, despite my best attempts throughout this week, I have found nobody who's able to play.

    Why am I writing this, then? To cut a long story short - since you didn't click on this for a review - The Last Story is a good game. It's actually very good. After playing it through, I would say that the game has its flaws, but what it does right, it does so well that it overpowers its drawbacks. In some respects, The Last Story still has not been replicated by any other game: it remains a unique mix of RPG and modern, real-time combat, with a story that forges bonds between the characters which make the player feel a true connection with them, woven into some really nice hand-crafted environments from castle to caverns. But while I recommend everyone with a Wii at least gives this game a try, that's not exactly why I'm writing this blog post.


    This blog post is, honestly, a bit of a last resort to see if I can find anyone interested in playing The Last Story online with me. If you're on the fence, practically any review online will tell you that this game is good. It was made by the creator of the original Final Fantasy, with the composer of that game as well; Sakaguchi set out to throw himself back into the mindset he had when creating FF, but with experience, new tools and new ideas. So as you can tell, this game is at least worth looking into if you have a sliver of interest. You can play the game online using an easy-to-use Wiimmfi patcher of your choice, whether that's patching an ISO or using an on-the-fly disc patcher.

    So to summarise: if you have even a small interest in this game, I recommend you give it a try. It's worth dusting off the Wii (or Wii U!) for, and I'd love to be able to play with more people online. As someone who is usually not a fan of RPGs, I got hooked until the end once I decided to try it, and I'm sure there are more people out there who will have a similar experience if they give it a chance. It would be really nice to play this game online, even once, and I'm sure it would be fun for everyone involved. Let me know below if you're interested!
    TheMrIron2
    [Reviewed on Xbox One/N64]

    Rare have made many blockbuster video games over the past 30 years or so. Among the first to come to mind would be GoldenEye 007, Banjo-Kazooie, Conker's Bad Fur Day, Battletoads... but one important game, released for the N64 at the turn of the century, is sometimes overlooked: Perfect Dark. Perfect Dark took the core of GoldenEye and upgraded it, pushing the N64 harder than arguably any other FPS on the platform. It added "Simulants" - bots for multiplayer - as well as a high resolution mode, support for 5.1 surround sound, dynamic lighting and so many other new things that Rare employees estimate only 30% of the GoldenEye engine was left, providing a basic framework for Perfect Dark.


    Story

    Rare were offered the chance to make another 007 game after the rapturous sales and reviews of GoldenEye, but the team didn't want to make another Bond universe title off the back of GoldenEye, so they politely declined. Bolstered by GoldenEye's success, Perfect Dark began development as a cyberpunk, dystopian sci-fi shooter in which humans and aliens end up working together to destroy a mutual enemy. The game centers around Joanna Dark (her name a play on "Jeanne d'Arc", or Joan of Arc), who works for the Carrington Institute and is determined to stop Cassandra and the dataDyne team. The backdrop is a galactic war between the Maian aliens and the "Skedar" aliens, with Carrington supporting the Maians and dataDyne supporting the Skedar, both in exchange for rewards that would make them the most powerful corporation on Earth. Some twists open up along the way explaining the threat of the Skedar in more detail - which I won't cover for obvious spoiler reasons.

    Gameplay

    Perfect Dark's gameplay is exactly what made GoldenEye great, but even better. Things were vastly improved across the board: dynamic lighting allowed you to shoot out lights and change the appearance of the whole scene, while 5.1 surround sound, a "high res" mode (on top of 16:9 support), more detailed animations (such as reload animations) and 45 minutes of scripted, fully voiced real-time cutscenes made Perfect Dark incredibly immersive. Multiplayer gained new game modes, a set of challenges and up to 8 bots alongside 1-4 players. The game is much more technically proficient than GoldenEye, with more detailed environments and weapons - including effects such as beautiful, reflective environment mapping, a rarity on N64 - as well as the aforementioned improvements.

    One of the other big changes in PD is secondary weapon abilities. The Laptop Gun can change into a sentry gun simply by using its secondary mode. The Shotgun has a double burst mode, the Falcon 2 has a "pistol whip" mode, the CMP150 has a follow lock-on mode (which, when used correctly, is very effective)... and the list goes on. Even your fists have a secondary mode, namely "disarm", in which you can punch an enemy and take away their weapon. If you pull out a weapon after disarming them, the enemy will surrender... usually. Sometimes, the enemy will pretend to surrender and pull out a pistol or secondary weapon instead. Sometimes, enemies will have short dialogues with each other. Sometimes, enemies will exclaim things like "I don't want to die!" as they get shot. Sometimes, enemies will jam their guns. The personality in every character adds to the immersion of Perfect Dark more than in your usual game.
    Another big deal is the co-op and counter-op modes. Perfect Dark's campaign can be played with a friend in co-op mode, but what's even more interesting is the campaign's counter-operative mode; one person takes the reins of the main character, Joanna, while the other person possesses an enemy soldier. This makes for a fascinating and fun experience that is quite unlike anything else you'll find.
    Design

    Perfect Dark's developers were well aware that it was going to be a Nintendo 64 game. For that reason, the art style was never completely realistic; the team struck a balance with a semi-realistic visual direction, which ended up working perfectly. The game was designed as a cross between Blade Runner and James Bond. For the remake, this same direction was cleaned up but maintained, and this is ultimately a decision that benefitted the remake.
    Weapons looked futuristic without looking ridiculous (did someone say Call of Duty: Infinite Warfare?) and some weapons, like the Falcon 2 and the K7 Avenger (below), were made up of largely reflective surfaces (achieved using actual environment mapping) which glistened in the dynamic lights and really looked better than any low-resolution texture could have made them look. Some of the textured weapons also look nice, however, such as the Dragon - though reflective weapons were more common to emphasise the lighting.

    The overall feel of the game was absolutely nailed. Whether you were infiltrating a corporate conference or exploring an alien planet, Perfect Dark's tone never put a foot wrong. Planets felt suitably foreign, but never too weird, and unnamed government research centers are set up convincingly. This is one of Perfect Dark's greatest strengths; almost every level has a vastly different setting, and there is always a sense of variety and originality when going from level to level. Speaking of different settings, the game's replay value is ridiculous. Playing a level on "Perfect Agent" adds many more scripted sequences and objectives compared to the basic Agent difficulty, so combined with the labyrinth of a multiplayer mode, Perfect Dark will keep you coming back.

    Unfortunately, all of this design took a toll on the original N64. On an Xbox 360 or Xbox One, the Xbox Live Arcade ["XBLA"] remake treats you to a crisp 1920x1080 presentation at an unwavering 60FPS during normal gameplay, but on N64 the frame rate could sit around 20FPS in some places. That's not to say the game never hits 30FPS, but many levels struggle under the weight of N64 limitations, and for some people this could make the game feel less responsive and a bit sluggish. It's nothing unbearable - unless you try 4-player split screen with bots - but the game's frame rate issues were cited in many of even the overwhelmingly positive reviews. Additionally, some levels make you resort to trial and error to figure things out, which can be frustrating. The design is not perfect, but it's undeniable what the team at Rare achieved with Perfect Dark in an artistic and design sense.
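    To illustrate why lower frame rates read as sluggish, frame rate converts directly into how long each frame lingers on screen (a generic sketch of the arithmetic, not anything specific to Perfect Dark):

```python
# Per-frame latency for a given frame rate: halving the frame rate
# doubles how long you wait to see the result of an input.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (60, 30, 20):
    print(f"{fps} FPS -> {frame_time_ms(fps):.1f} ms per frame")
```

    At 20FPS each frame hangs around for 50ms, three times the ~16.7ms of the XBLA remake's 60FPS - which is exactly the responsiveness gap described above.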

    CONCLUSION:

    SCORE [N64]: 8.5/10
    SCORE [XBLA]: 9.5/10

    Perfect Dark reminds us of what makes a great game. It is not flawless, but Perfect Dark is easily one of the best shooters on any Nintendo console to date, and arguably one of the best shooters on any console, full stop. The N64 version loses points for its frame rate which, while you get used to it, could make the game feel sluggish - one or two levels rested in the teens. The XBLA version gets bonus points for being an incredibly crisp and faithful remake which brings out exactly what the designers wanted from the N64 classic, with the addition of online multiplayer.

    The bottom line is that regardless of platform, Perfect Dark is an experience that anyone who calls themselves a fan of sci-fi games or action games must play. If you are willing to overlook the imperfections, you will not regret picking up this game, and if you like collecting (or maybe just playing) Nintendo 64 games then you owe it to yourself to pick this one up. Otherwise, PD is $10 on Xbox Live Arcade; if you like sci-fi or action games, that's a bargain for one of the all-time greats of either genre.

    Hope you enjoyed the review. If you're a Perfect Dark fan, consider checking out Perfect Dark: Reloaded for PC and PSP - we're making big strides with the project and we'll have some news to show soon enough. Otherwise, tell me what you think in the comments below. See you in the next blog!
    TheMrIron2
    [DISCLAIMER: This is not a rickroll. He does make actual music, which may come as a shock to millions of oblivious rickroll victims.]

    So I decided to try something different, as tech blog upon tech blog was becoming monotonous for me to write up. I've decided to try something more musical: a review of Rick Astley's latest album, which I've been listening to, dubbed "Beautiful Life". This is Rick Astley's second album since making an official comeback in 2016 with "50", and while not matching the success of his 80s superstardom, Astley did reasonably well in the UK, charting at 7th place - just below Drake's Scorpion at release (July 13).

    So how has Rick Astley changed over the last 30 years? Well, he's changed a lot. His deep voice is still there, but it has a newfound hint of maturity and warmth that was missing in his colder 80s music. Beautiful Life was written and produced entirely by Astley in his home studio, which is quite an accomplishment. But how has he evolved since 50 - his 2016 album, also written and produced by himself - and what has he done differently? He studied 50 - which was in itself a jumble of experiments that I'll cover another time, if this blog series becomes a thing - and found that people preferred his more upbeat songs to some of the soulful (gospel, even) tracks, and that even his older listeners liked to dance to his songs. So he made a less risky collection of pop tunes for Beautiful Life, and the end result is a low-risk album which is safe but, in parts, flat. (Remember to check the hyperlinks if you want to listen to the songs!)

    The album opens with the catchy title track "Beautiful Life", a song which, despite repeating nothing but 3 chords for nearly 4 minutes, works quite well as a simple pop song. It's catchy and nothing too complex. It gets tiring after a while, but it's a decent song all things considered. The album then moves on to the even better "Chance to Dance", an upbeat dance song with a good bass line and a nice melody. Unfortunately, the album starts to waver from this point.

    "She Makes Me" is a solid track in an album which suffers from unvaried tracks throughout a good two-thirds of its length. It's original and upbeat, and it has its own identity and feel as a song. "Shivers" is another song which is catchy but fails to really capture anything or make any meaningful impact - and it's not the last song on the album to do this. "Last Night on Earth" attempts to replicate She Makes Me and Chance to Dance's trick of starting slow and quiet and building into a loud chorus, but even Shivers did something similar, and by this point the listener is not really interested as the chorus rolls past again and again.

    Unfortunately, this middle section beginning with "Last Night on Earth" (or maybe even Shivers) is the album's weak link. These songs all end up - perhaps unintentionally - sounding very similar; after my first listen through the album, a few of these tracks began to blend together as I struggled to remember them due to their indistinct sound. "Every Corner" is another song which might have had potential if it weren't lumped in with the rest of the similar pack, and it comes off as a bit of a chore while listening to the album - there are two verses before the chorus repeats for the majority of the song, most lines of which are "Yes I love, I love you blind, in every corner of my mind", with a bridge that barely attempts to change the song. This is probably the part of the album where you're tempted to call time or take a break because it's all so similar, but fortunately it picks up after this.

    "I Need the Light" is very similar to "Every Corner" in terms of the verse/chorus placement but it sounds better for reasons I can't put my finger on. At this point, the album has an overdue attempt to shake things up. "Better Together", while nothing revolutionary, attempts to vary the album a bit with more of the same spoken verses, but the chorus has a more original approach switching between Rick's light-sounding backing vocals and his more grounded forefront vocals. It's not much, but it's a change of pace when the chorus takes up the majority of the song. From here on out are some of the best tracks of the album, however.

    "Empty Heart" is a breath of fresh air for the album, with a slightly more inventive chorus, nicer verses with some soft crooner singing and a bridge with an interesting chord progression. The song is well placed, as it brings your attention back for the last push of the album. "Rise Up" can feel a bit directionless at times, but it features some nice, gentle low singing mixed with a soft, bright, higher voice. The bridge is actually quite nice as well, and you come out feeling like it was a good chill song, if nothing special.

    Next comes the best song of the album, in my opinion: the inspiring and charged "Try" - one of his best songs since his return, without a doubt. A strong vocal performance spanning two octaves (from the verse's low and solemn tone, going down to A2, to his fierce chorus reaching an impressive belted A4 - the second highest note anyone classed as a "baritone" can belt), paired with a strong instrumental backing track, makes Try a more engaging listen than arguably anything else on the album. When I reached the second chorus and he began to belt higher, I was almost taken aback; this was a note he had not hit on any album since 1993, and it was an excellent belted note with good strength, grit and control - completely out of the blue! Granted, as someone who does vocals, I naturally pay more attention to the singing and range than an ordinary listener, but even without paying attention you could pick up on the emotionally charged singing in the song. A highlight of the album and a worthy single.
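    As a side note for the theory-inclined, the two-octave claim is easy to sanity-check: in equal temperament with standard A4 = 440 Hz tuning, each octave doubles the frequency (a quick sketch of my own, not tied to the recording itself):

```python
# Equal-temperament pitch: each semitone multiplies frequency by 2^(1/12),
# so 12 semitones (one octave) doubles it. Standard reference: A4 = 440 Hz.
def note_freq(semitones_from_a4: float) -> float:
    return 440.0 * 2 ** (semitones_from_a4 / 12)

a4 = note_freq(0)    # 440.0 Hz - the belted chorus peak
a2 = note_freq(-24)  # 110.0 Hz - the low verse note, 24 semitones down
print(a4 / a2)       # a quadrupling in frequency, i.e. two full octaves
```

    So between the verse's A2 and the chorus's A4, the fundamental frequency of his voice quadruples - a substantial ask of any singer.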

    The album fades off with the interesting "The Good Old Days". It's an experimental song in a different key to anything else in the album (F major, whereas many songs in this album were in C major/A minor or G major) and has a unique feel to it which I thought was actually quite nice. It wasn't a breathtaking song, but it was definitely a more varied and different song and was a nice way to end the album. I just wish he had tried the same level of experimentation with the middle third of the album.

    Song Ratings
    Beautiful Life: 6.5/10
    Chance to Dance: 7/10
    She Makes Me: 6.5/10
    Shivers: 6.5/10
    Last Night on Earth: 5.5/10
    Every Corner: 5.5/10
    I Need the Light: 6/10
    Better Together: 6.5/10
    Empty Heart: 8/10
    Rise Up: 7/10
    Try: 8.5/10
    The Good Old Days: 7.5/10

    Overall Rating: 6.5/10

    Conclusion

    Astley's latest album is a collection of songs which unfortunately don't try to be anything more than "another song" for about half of the album or so. While perhaps more enjoyable on their own, many tracks feel similar and there is an unavoidable feeling of repetition throughout the album up until Empty Heart. There are some good songs in here, but the fact that you have to sit through a collection of mediocre songs lowers the album's overall rating and dampens the potential of the better songs.

    Thanks for reading this review, and if people like these, I might even do more. I just felt like doing something outside of tech would be beneficial. Let me know in the comments if you have anything to add about any of these songs, and if I made you discover that Rick Astley is not dead, then like this post for learning an invaluable piece of information. Thanks for reading!
    TheMrIron2
    So before I do another Tech Talks post, this is something I've been wanting to do for a while, and now that I've at least used all PSP models, I think I'm in a good position to talk about all of them. I'll detail the pros and cons of each model, with a summary paragraph about each.


    PSP 1000

    Pros:
    - Bulkier build, preferable for some
    Cons:
    - 32MB RAM (vs. 64MB in later models)
    - Screen "ghosting"
    - No USB charging
    - No TV output

    We'll start right off the bat with the original PSP-1000. The PSP 1000 (1K) is known as the "fat" PSP by many players. It was the very first PSP model released, which is both a good and a bad thing. It is the least "portable" of them all, at 23mm thick and 280g in weight (though some find the extra chunkiness better to hold), but it is also easy to see why many still like it: it is the original PSP, and some prefer the weightiness of the fat models. However, it is important to note its objective disadvantages. The 1K's screen suffers from a ghosting problem, where a slow response time makes our eyes perceive blurriness, particularly in motion. The 1K also has half the RAM of any other model, at 32MB; fortunately, this is not a problem in games, as games are restricted to using just 32MB. However, later models use the extra memory as a cache for UMDs, improving load times, and the extra memory also noticeably improves the web browser. Some homebrew programs rely on the extra memory of later models as well, but it isn't a deal breaker most of the time. The 1K does lack some later-model features like USB charging and TV output, but it is a perfectly functional PSP model and still a favourite for many PSP fans.
    [​IMG]

    PSP 2000

    Pros:
    - Good screen
    - Thinner and lighter
    Cons:
    - "Cheaper" feel (vs 1000)
    - Video output in-game limited to 480p (240p for PS1)

    Another fan favourite, the PSP 2000 is a good middle ground between the 1K and 3K, with a screen less problematic than either of those models. It sheds a lot of weight, clocking in almost 100 grams lighter (189g) and ~5mm thinner (18.6mm). The 2000 also changed the serial port to accommodate a new TV output mode with official video cables. This supports 240p, 480i and 480p, ideal for watching movies or playing PS1 games at 240p. However, PSP games - which render at 272p - are letterboxed (i.e. surrounded with a black border) and require zooming in for a full-screen picture, and on the PSP-2000 games must use the 480p mode (though non-game content can use 480i as well).
    The screen was much improved, fixing the ghosting on the 1000 and making the screen much brighter overall. It was overall a good, if imperfect, upgrade over the 1000.
    [​IMG]
    PSP 3000

    Pros:
    - Vibrant screen
    - Small redesigns
    - Microphone
    Cons:
    - "Interlacing" or "scanline" screen issue

    The PSP 3000 was not as marked an improvement as the 2000 and is seen in a slightly less positive light by some due to a widely known screen issue. The 3000 received touch-ups here and there, such as a reworked disc tray, and it allowed games to use 480i in TV mode, opening up more use for composite cables and CRT TVs. It even received a screen with improved anti-glare, an increased colour gamut and five times the contrast ratio. That screen, however, was subject to a notorious "scanline" issue: it could sometimes be plagued by interlacing-like artefacts, most noticeable in motion. This is a hardware fault, and Sony were powerless to fix it. The 3000 is a solid model that many will recommend, but the scanline issue irks others, and as such some would say a 2000 is a safer bet.
    [​IMG]
    [​IMG]
    PSP Go

    Pros:
    - PS3 controller support via Bluetooth
    - Compact design
    - Best screen
    - 16GB internal storage
    - All-in-one video output and charging cable
    - Official "dock" for playing on TV and charging
    - Ability to "Pause" game (like a save state)
    Cons:
    - No UMD drive
    - Battery not removable
    - More cramped controls
    - Proprietary memory stick and cable, different to other models

    Written off as "the digital-only one", the PSP Go's popularity has risen since its initial commercial failure. Released in 2009, the PSP Go (aka the PSP N1000) was a significantly more compact version of the PSP that was arguably too far ahead of its time for its own good. Featuring support for wireless PS3 controllers, a flawless, denser screen and a more pocket-friendly design, the Go looked good on paper; but at the time, the lack of UMD support disappointed. The PSP Go could be played with its controls slid out like a normal PSP, or folded up and played like the Switch's "tabletop mode" with a connected DualShock 3, or "docked" and connected to a TV with a controller - I'm sure you see the pattern. It had 16GB of internal memory (expandable) to offset the lack of a UMD drive, but it faced problems. First, the PSP didn't support WPA2 like modern routers - only WEP and WPA - so you needed a less secure router or an open hotspot to get online or download anything. Secondly, the controls were smaller and cramped some players' hands; connecting a DualShock 3 worked, but you needed a PS3 (or MotioninJoy, unofficially) to pair the two, and not everyone had one available. Still, the concept was fascinating, and since its commercial doom the Go has been gaining traction as a go-to, pocket-friendly homebrew device.
    [​IMG]
    [​IMG]
    PSP Street

    Pros:
    - Matte design for better fingerprint resistance, doesn't get dirty easily
    - Better battery life than other models
    - Good screen
    Cons:
    - No AdHoc or WiFi
    - One speaker
    - Battery not removable

    Perhaps the most interesting model of all in its own way, the PSP Street (aka PSP E1000) was Sony's final PSP model, a budget-friendly device released in anticipation of the PS Vita. It cut out WiFi - many online services were shutting down anyway - and stripped out one speaker. The absence of AdHoc play is a shame too, a casualty of removing wireless entirely. That said, the Street still has perfectly fine mono sound from its speaker and supports stereo audio as normal via headphones. Funnily enough, the Street's screen is actually quite good, avoiding the issues of the 1000 and 3000; Eurogamer said it was "easily" ahead of the 3000, with more vibrancy than that model and no ghosting unlike the 1000. The Street also has a battery life advantage thanks to the stripped-out wireless and speaker, but the battery isn't removable, making the Street a very mixed bag.
    [​IMG]

    Conclusion

    So, what's the best PSP? It all comes down to preference. Some people will say the PSP 1000 because of its premium, bulky feel. Some will say the PSP 2000 because of its improvements over the 1000 without the issues of the 3000 or the feature cuts of the E1000. Some will say the 3000 because of its vibrant screen. Some will say the Go because of its form factor, screen and controller connectivity. Some will even say the PS Vita in Adrenaline mode is the best PSP! Regardless, it is entirely subjective which is the "best", as there is no one perfect model; all of them have pros and cons, and that's the bottom line with all PSPs. Thanks for reading.
    TheMrIron2 The very first games consoles did not think much about resolutions. Indeed, unlike our modern displays, CRT TVs were not fixed-resolution, and resolution "standards" varied and were sometimes arbitrary. At the time this wasn't really an issue; developers were more concerned with how they managed their sprites than with overall pixel counts. Flat-panel TVs, however, started to change all of this. I'll be covering everything from the low-resolution consoles of old to classic handhelds' resolutions and even some surprisingly low resolutions on newer systems.

    The first big games console was the Atari 2600. Unlike predecessors such as the Magnavox Odyssey, the 2600 was a rampant success, and in its prime it made waves like almost no other console for many years. Back then, video games had a very distinct, blocky look - which could fill an entirely separate article. Regardless, take this screenshot as an illustration if you somehow don't know what a 2600 game looks like.
    [​IMG]
    The 2600 had a maximum resolution of 160x192 (only really achievable with programming hacks anyway) and there were many restrictions, as the Atari 2600 was effectively designed as a "Pong" machine: you could only have two 8-pixel-wide player sprites, two missiles and one ball object on one scanline. Despite this, developers have found fascinating ways of squeezing great visuals out of this 1MHz "Pong machine" with 128 bytes of RAM and cartridges no larger than 4KB; look no further than the stellar remake of Donkey Kong, called "Donkey Kong VCS", made by homebrewers in 2013.

    Enough about the 2600 though - that's for another article. The next big milestone was the Nintendo Entertainment System, or NES. Released under the name "Famicom" in Japan in 1983 (and elsewhere in the following years), the NES was a substantially more capable system than the 2600. Capable of 8 sprites per scanline, 60 frames per second graphics, far more colours and massive cartridge sizes (the largest licensed Famicom release was a 1MB cartridge, with unofficial games going as large as 64MB!), it was an eye-opening upgrade. The NES ran at 256x224 with a 1.79MHz CPU, and the graphics it pushed out were simply a massive upgrade; with classic games such as Super Mario Bros, Zelda and Contra, the NES revolutionised the industry forever.
    [​IMG]
    The Master System was a similar story, and even the SNES offered 256x224 as standard, with a limited 512x448 mode also possible. However, the SNES had 32,768 colours compared to the much smaller palette of NES games. The Mega Drive actually one-upped the SNES in this regard, offering a higher standard resolution at 320x224 (320x240 in PAL territories) - few arcade boards at the time could produce this resolution! However, it only had a palette of 512 colours, of which 61 could be displayed at once (though tricks such as dithering could simulate more).
    [​IMG]
    Around this time, the first portable systems also started to emerge. Nintendo introduced the GameBoy: a simple monochrome device with a 4MHz CPU, 8KB of RAM and graphics somewhat comparable to the NES. With a 160x144 screen and a long battery life, it was a big hit for bringing classic console games to the portable arena.
    [​IMG]
    However, the GameBoy was not alone. I'll touch on one of its many competitors of the time: the Atari Lynx, launched in the same year, 1989. Equipped with a slightly lower-resolution screen at 160x102 (but a wider aspect ratio), two 16-bit chips @ 16MHz each, a dedicated "blitter" for every imaginable sprite effect, a 65C02-based CPU at up to 4MHz and a staggering 64KB of RAM, the Lynx was out of this world and closer to a Super NES than a GameBoy. In fact, the Lynx was capable of more advanced sprite effects than the SNES, and could do the SNES's famous "Mode 7"-style effects better than the SNES itself! Unfortunately, the Lynx was too far ahead of its time for its own good; this astonishing hardware ate through batteries much faster than the GameBoy, ultimately dooming it to second place behind its much more battery-friendly rival.
    [​IMG]
    There were a few consoles between the SNES and N64 eras which are often overlooked. The Atari Jaguar, Atari's final console and a miserable commercial failure (under 250,000 units sold), was in fact an incredible - if complex and flawed - piece of hardware. (Another article, perhaps?)
    Capable of over 16 million colours and true 3D rendering with techniques such as Gouraud shading, the Jaguar was unfortunately stuck between the 2D and 3D console eras; it was geared more towards 2D, as it was believed 2D would keep its appeal for years to come and 3D was not quite viable in 1993. The Jaguar was incredibly capable - outdoing even the personal computer of id Software programmer John Carmack (who wrote much of Doom, Quake and Wolfenstein 3D) at the time - but it operated differently to other machines. It didn't have a typical frame buffer, just object lists with minimum/maximum X and Y values. It output at 320x224, but it was possible to achieve up to roughly 1400x480 @ 60Hz (however, to use the 24-bit, 16.7-million-colour mode, the horizontal resolution could be no higher than 720), and 1280x480i is a working VGA mode, though the aspect ratio is a little unreasonable.
    [​IMG]

    Enter the true advent of 3D graphics on consoles. The Nintendo 64 and PS1 both had a standardised resolution of about 320x240. However, the PS1 and N64 had "high-res" modes, typically 320x480 and 640x240 respectively (even 640x480 was possible on either platform, though VERY few games used it). The PS1 and N64 had different problems, however; on the N64, a 4KB texture cache meant that 32-bit colour textures could be no bigger than 32x32, since 32 x 32 x 4 bytes takes up all 4096 bytes. Textures in 4-bit colour-index formats could be 64x64, however. Games worked around this and many other of the N64's problems using various techniques which I won't touch upon here.. yet ;)
    The PS1 had less video memory, and because of this, oddball resolutions like 320x480 and 512x480 were common for "high-res" games. The Saturn was no stranger to weird pixel counts either, with 704x448 (effectively 720x480 once overscan on both sides is accounted for).
    [​IMG]
    The Dreamcast was the first console to truly standardise a resolution such as 640x480. Games run at this resolution with no notable exceptions, and most support 480p via VGA, with 480i/240p games being the exception rather than the rule. It was capable of downsampling from higher internal resolutions through super-sampling anti-aliasing (SSAA), but officially 640x480 was the standard, with 320x240 as the alternative.
    [​IMG]

    The PS2 is where things got a bit more interesting. Developers effectively had free rein over their resolution, which could be anything from 256x224 to 1920x1080. Yes, 1920x1080 - 1080p! Most PS2 games were either 640x448 or 512x448. However, some were exceptions; ICO was 512x224, while games such as BLACK switched to a proper 640x480 in progressive scan (where overscan was not an issue), and some games like Gran Turismo 4 even used a mode estimated at around 640x540 in 1080i output mode.
    [​IMG]
    The GameCube was similar to the Dreamcast in that anything but ~640x480 was unusual. The Xbox was much the same, but it also had some games that ran at a full 720p! The Xbox's extra memory allowed such high resolutions in the select few titles which opted for them, and it supported 1080i output as well - though games didn't tend to render at that resolution; many 1080i outputs were simply upscaled from 640x480.
    [​IMG]
    The PS3 and 360 are where things got interesting for the last time in the home console scene, really. The Xbox 360 had a 10MB pocket of very fast eDRAM, so fitting the frame buffer in there made post-processing and MSAA very cheap. However, the highest realistic resolutions that fit were approximately 1024x600 or 880x720. The Call of Duty games used such resolutions with MSAA to keep the image clean, with the 360's dedicated upscaling chip scaling the final image to the output resolution. The PS3 was weirder: its hardware scaler only worked on modes with a vertical resolution of 1080 - it supported only 960x1080, 1280x1080, 1440x1080 and 1920x1080 - so any game below these had to be software-scaled.
    [​IMG]
    The PS4 and XB1 use pretty straightforward resolutions... unless you count the Pro/One X models. While the base models stick to fairly standard resolutions such as 900p and 1080p, the PS4 Pro in particular tends to use resolutions such as 1920x2160, rendering half of a 4K image and reprojecting it into full 4K output. This generally works fine but can have issues with rectangular, non-square pixels. The One X tends to be powerful enough to avoid these techniques, though checkerboarding and similar approaches do appear on One X as well.
    [​IMG]
    The Switch is a different story altogether. It is not very comparable to the PS4/XB1 and is closer to the PS3/360, and this is where things get very curious. It is close enough to stay relevant, with modern GPU features and such, but it is not actually very powerful. Games often use dynamic resolution, since on a small screen you tend not to notice. Wolfenstein 2 goes as low as 360p, which doesn't sound great, but the most glaring example yet comes from ARK: Survival Evolved. Averaging 360p-432p docked, the game can drop to approximately 304x170 - even docked - making it the lowest-resolution Nintendo home console game to date. In 2018. In portable mode it averages 212p, with lows too low to report.
    [​IMG]
    Where are resolutions likely to go from here? Microsoft and Sony's upcoming consoles are likely to be consistent in achieving 4K, but for Nintendo there is really no telling what direction they will take, given their avoidance of technology trends. Handheld resolutions are a different story altogether, as past a certain pixel density there is no point in going higher; 720p is already somewhat pushing it on Switch. Time will tell; thanks for reading this long post! If you enjoyed it, leave a like and some feedback.
    TheMrIron2 The PlayStation 2 is the best-selling console ever. But people often forget - and unfairly criticise - what it had under the hood. The PS2 had, in some ways, some of the most advanced hardware of its generation, yet it is often slammed for being “weaker” than the GameCube, Xbox or sometimes even the Dreamcast.

    HARDWARE:

    CPU:
    • 294/299MHz MIPS IV/R5900 based CPU core
    • Vector Unit 0 (VU0) - 147MHz vector coprocessor tightly tied to main CPU core, often used for polygons, geometry transformations, physics etc (either in parallel or serial)
    • Vector Unit 1 (VU1) - 147MHz vector coprocessor which is closer to GPU but still parallel to CPU; often acts as a geometry pre-processor, and during serial VU0 operations the output from VU0 is sent to VU1 to do base work eg. camera, movement
    • Two different ways of performing operations with VUs
    • Parallel: Results of VU0 sent directly as another display list
    • Serial: Results of VU0 sent to VU1
    GPU: Custom 147MHz “Graphics Synthesizer”
    • Variable resolution from 256x224 to 1920x1080, interlaced or progressive
    • 4MB embedded memory (32MB of main memory may be used to store textures)
    • 48GB/s bandwidth, 2560-bit bus
    • 2.352 gigapixel/s (1.2Gpixels/sec with Z-buffer, alpha and texture)
    RAM:
    • 32MB RDRAM @ 3.2GB/s (main)
    • 2MB audio memory
    • 2MB I/O memory (4MB in later slims with PPC Deckard I/O)
    Other:
    • 37.5MHz MIPS I/O processor (PS1 CPU at different, variable clockspeed)
    • Image Processing Unit (IPU) for DVD/FMV playback; native hardware MPEG-2 support

    PERFORMANCE:

    CPU:
    • Floating point: 6.2GFLOPS
    • Geometry transformation (VU0 + VU1): 150 million vertices/second
    GPU:
    • 75,000,000 polygon/s (no texture, flat shaded)
    • 37,750,000 polygon/s (1 full texture with diffuse map and Gouraud shading)
    • 18,750,000 polygon/s (2 full textures with diffuse map, Gouraud shading, specular light/alpha or others)

    So as you can see, the PS2 was no slouch. So what happened?


    THE PROBLEM(s)

    The biggest issue for the PS2 was that it required a different approach to game development than usual. This made porting difficult, because code often required drastic reworks. Take an example from early in the console’s life of an exclusive game and a ported game: Burnout and Gran Turismo 3.

    [​IMG]
    [​IMG]
    The difference is very clear; games designed for the PS2 from the ground up fared exponentially better than those designed elsewhere and ported over. But something still doesn’t add up; even some exclusive games don’t look as good as their GameCube and Xbox equivalents. So what gives?

    The reason is that for a long time, the VUs had to be programmed in their own unique assembly language, which made them very tough to utilise well. Eventually, however, tools were released to program the VUs in C, and combined with general updates to development tools and developers sharing tricks, PS2 results improved dramatically year on year. By 2005, the PS2 was often visually indistinguishable from the GameCube and Xbox.
    [​IMG]
    But this wasn’t always the case. The Xbox’s GPU still supported more shading techniques in hardware and had more total memory, so it was common for Xbox games to retain a slight edge. Some Xbox games did things the PS2 couldn’t match to the same extent, such as pixel shading, which the PS2 had to recreate in software.

    THE SUCCESS

    So what did the PS2 do right, then? The PS2 was capable of pushing more geometry than even the Xbox and Wii, thanks to the VUs acting as geometry processors. The astronomical bandwidth and fill rate also meant the PS2 was excellent at effects and post-processing - so much so that MGS2/3’s depth of field had to be cut from the HD remaster because it was problematic for the PS3/360 GPUs, and Zone of the Enders ran at a third of its PS2 framerate (despite many compromises, such as quarter-resolution effects) until the effects were reworked completely. The PS3 and 360 could not brute-force effects like the PlayStation 2 could.
    [​IMG]
    One game that exploited the bandwidth and fill rate to the limit was Zone of the Enders, as mentioned. The game prided itself on whipping up storms of effects while still running at 60 frames per second; no game on Xbox, GameCube or Wii tried anything similar, because developers who had worked with the PS2 knew those machines wouldn’t stack up.
    [​IMG]
    A game which abused both the bandwidth and geometry capabilities was Metal Gear Solid 3. Easily one of the most advanced games of the 6th generation, MGS3 had incredibly advanced geometry - with every blade of grass in the jungle rendered individually - and a brute-forced, layered depth of field effect that was more accurate than anything until Direct3D 11 introduced proper depth detection.
    [​IMG]
    [​IMG]
    A final example is Shadow of the Colossus, which I covered extensively in a previous blog post. SOTC had real-time bone calculations, simulated HDR and 20,000-polygon colossi with one of the most convincing fur effects of its time. It used the bandwidth to slap layer upon layer of textures onto a colossus for the rich fur, it used the VUs to calculate bone structures, and it used a level-of-detail (LOD) system to make streaming from disc work efficiently. Another very impressive game.
    [​IMG]

    CONCLUSION

    The PS2 was capable of many brilliant things, and while it did fall short in some areas, it made up in others. So next time someone tells you the PS2 was underpowered… you know what to say.


    Thanks for reading this tech talk; if you enjoyed it, leave a like and give me some feedback. You should also check out the previous blog posts if you’re interested. I’d like some suggestions on what to do next, too, but until then thanks for reading.
    TheMrIron2 The PSP was Sony's first - and most successful - foray into the world of portable gaming. It amassed about 80 million sales and offered a convincingly console-quality experience for much of its lifetime. This post will have a look at what the PSP did right, what it tried and what games showed what the PSP could do. First, a quick spec overview:

    CPU: 1-333MHz MIPS R4000-based “Allegrex” CPU; 1-333MHz MIPS “Media Engine” CPU (background processes only; non-programmable for commercial software)
    GPU: 1-166MHz “Graphics Core” (2MB eDRAM)
    RAM: 32MB (PSP-1000), 64MB (all later models)
    Storage: Memory Stick based; up to 128GB (possibly further with adapters)

    The PSP 1000 came with 32MB of RAM, but later models added another 32MB for a total of 64MB; this extra memory was only used as a UMD cache and as a much-needed expansion for some system software like the internet browser. Compared to the DS, the PSP was a much more capable device, and it opened up a lot of possibilities. But outside of the hardware, what did the PSP bring to the table in terms of software?

    SOFTWARE

    The PSP was quite important because it was the first handheld console with a full GUI operating system. It was the first adaptation of the XMB interface to the gaming world (excluding the Japan-only PSX DVR), and it offered a variety of options. It had native support for media formats such as H.264 MP4 video and MP3 audio, much like the codecs of today. It supported PNG, BMP and JPG photos, and it had USB connectivity for transferring movies, music, TV shows and pictures to your PSP. That sounds great, doesn’t it? It gets even better: Sony later added a web browser, Skype and more besides. For its time, the PSP had some genuinely revolutionary software.

    [​IMG]
    [​IMG]
    [​IMG]
    PSP GO

    The PSP Go is the “other” PSP model and was an interesting experiment from Sony. It featured the same hardware, but it had a smaller screen (3.8”, compared to 4.3”) as well as fold-up controls, a special "Pause Game" feature (similar to a save state), Bluetooth PS3 controller connectivity, and 16GB of expandable internal storage. The catch? There was no UMD drive. For some, this was a deal breaker, but as UMDs are dying off (they are louder, more battery draining and less convenient than digital copies) the PSP Go has become an increasingly viable option.

    [​IMG]

    The PSP Go also tried something bold: it combined the PSP’s video output and charging into a single dock, much as the Nintendo Switch would later do. It even had a “tabletop mode” with its fold-out controls and PS3 controllers! However, the idea may have come too early for its own good, as the PSP Go never took off. Still, it is fascinating to revisit these experiments that were ahead of their time and paved the way for later systems.

    [​IMG]

    [​IMG]
    EARLY USE

    The PSP, however, did not have its full power available immediately. For the first few years, games were restricted to a 222MHz CPU mode and a 111MHz GPU, locked down due to initial concerns about battery life. This caused games such as SOCOM, Star Wars Battlefront and GTA - which still looked comparable to their console counterparts - to run markedly worse than they should have. Still, initial impressions were very positive.

    [​IMG]


    In 2007, Sony unlocked the full 333MHz/166MHz CPU/GPU speeds. This was a big deal; games such as God of War immediately set to work utilising them, and it delivered: God of War on PSP looked extremely similar to its PS2 entries, which were already praised as some of the best-looking games on that console.

    [​IMG]


    Some franchises, such as Metal Gear Solid (below: Metal Gear Solid: Portable Ops and Metal Gear Solid: Peace Walker respectively), had games released both before and after the clock speeds were unlocked. It is interesting to see what developers did with the 50% increase in CPU/GPU speed; Peace Walker refined the look of Portable Ops and made everything much more detailed, while also improving the framerate. Oddly, PW retained a 20FPS cap, but using cheats to force 30FPS/333MHz shows the game has no issue at a higher framerate. It is speculated that Peace Walker used a middle-ground speed of 266MHz to balance battery and performance, and because the game never drops a frame at 20FPS, it feels fine to play.

    [​IMG]

    [​IMG]


    ATTACK OF THE HACKERS


    Sony, however, were locked in a constant cat-and-mouse game with hackers and failed to keep the system software exploit-free. With each new exploit, homebrewers did incredibly impressive things with the PSP hardware. An unofficial port of Quake III Arena was made, and a well-optimised, hardware-accelerated and moddable version of the Quake engine was ported to PSP. This engine famously became the base for many PSP homebrew games, the best known being Kurok - a fanmade game inspired by Turok: Dinosaur Hunter for N64. Other Quake-based homebrew games include Counter-Strike: Portable, Nazi Zombies Portable, Halo Revamped, the cancelled Cause of War: 1944 and the work-in-progress projects Project Frost and Perfect Dark: Reloaded (both of which I’m involved in, on an unrelated note). These could use the whole 64MB of RAM in later PSPs to their advantage, allowing more ambitious texture and map work.

    [​IMG]

    [​IMG]
    Among other homebrew projects are an N64 emulator, a Minecraft remake and the usual stuff like Linux. The N64 emulator is particularly impressive and, like the Minecraft remake, is still actively maintained. The PSP scene retains a healthy following.

    CONCLUSIONS

    The PSP was a very powerful handheld for its time and has - especially had, in its prime - one of the most vibrant and ambitious hacking scenes ever seen. It was capable of many impressive feats and endures as a fan favourite, even though the Vita and other systems have superseded it. It remains a great choice as an emulation machine, a media player or simply a cheap all-in-one gaming and multimedia device.


    Thanks for reading this article, and if you enjoyed this, I’m happy to write more; tell me any feedback you have, including suggestions, as well.
    TheMrIron2 I thought "What is the actual point, there is nothing interesting about me that I've talked about on GBAtemp and this is a dumb trend", but then I realised
    a) No harm in doing so
    b) people can freely ask programming and hardware questions and I love answering those

    So ask away
    TheMrIron2 In the previous blog post, I covered some simple "Hello World" and button input programs to help us test our new PSP setup. This time, by popular demand, we're diving into 3D rendering! I will be using the ShadowProjection.c demo from the PSP SDK, avoiding the typical cube example because its manually defined vertices can be overwhelming for some.

    Shadows

    Code:
    #include <pspkernel.h>
    #include <pspdisplay.h>
    #include <pspdebug.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <math.h>
    #include <string.h>
    
    #include <pspge.h>
    #include <pspgu.h>
    #include <pspgum.h>
    
    Here we include the libraries our code needs. You may recognise some of them, but others may be new. The headers beginning with "std" are standard C/C++ headers that will help us write more advanced code than you may have previously encountered; math.h is self-explanatory, and string.h will be too. The final three are all related to driving the PSP's GPU. Exciting! We're finally harnessing the PSP GPU and its hardware 3D rendering.

    Code:
    PSP_MODULE_INFO("Shadow Projection Sample", 0, 1, 1);
    PSP_MAIN_THREAD_ATTR(THREAD_ATTR_USER);
    
    #define printf    pspDebugScreenPrintf
    
    static unsigned int __attribute__((aligned(16))) list[262144];
    
    typedef struct Vertex_Normal
    {
        float nx,ny,nz;
        float x,y,z;
    } Vertex_Normal;
    Okay! That's a lot of new stuff! Here we're naming our module as usual and defining printf as the debug-screen printf, but we're also allocating a display list: a 1MB buffer (262,144 unsigned ints), aligned to 16 bytes as the hardware expects, which will hold the commands we send to the GPU. We also set up our very first struct, so we can create vertices at will, each with a normal (nx, ny, nz) and a position (x, y, z).

    Code:
    typedef struct Texture
    {
        int format;
        int mipmap;
        int width, height, stride;
        const void* data;
    } Texture;
    Another struct, this time for textures. When we create a new texture with this struct, we set its pixel format, its mipmap level, its size and stride, and a pointer to its data.

    Code:
    /* grid */
    #define GRID_COLUMNS 32
    #define GRID_ROWS 32
    #define GRID_SIZE 10.0f
    Here we define the dimensions of a 32x32 grid, 10 units across, for our 3D scene to take place on.

    Code:
    Vertex_Normal __attribute__((aligned(16))) grid_vertices[GRID_COLUMNS*GRID_ROWS];
    unsigned short __attribute__((aligned(16))) grid_indices[(GRID_COLUMNS-1)*(GRID_ROWS-1)*6];
    Here's where our Vertex_Normal struct comes in: we allocate one vertex for every point on the grid, plus an index array. Each grid cell is drawn as two triangles, hence the (GRID_COLUMNS-1)*(GRID_ROWS-1)*6 indices.

    Code:
    #define TORUS_SLICES 48 // numc
    #define TORUS_ROWS 48 // numt
    #define TORUS_RADIUS 1.0f
    #define TORUS_THICKNESS 0.5f
    
    Vertex_Normal __attribute__((aligned(16))) torus_vertices[TORUS_SLICES*TORUS_ROWS];
    unsigned short __attribute__((aligned(16))) torus_indices[TORUS_SLICES*TORUS_ROWS*6];
    Now we're setting up a torus in the same way: one vertex per slice/row intersection, and six indices (two triangles) per quad.

    Code:
    #define LIGHT_DISTANCE 3.0f
    
    
    int SetupCallbacks();
    
    void genGrid( unsigned rows, unsigned columns, float size,
        Vertex_Normal* dstVertices, unsigned short* dstIndices );
    void genTorus( unsigned slices, unsigned rows, float radius, float thickness,
        Vertex_Normal* dstVertices, unsigned short* dstIndices );
    Since we're dealing with shadows, we need our lighting set up - here, just the light's distance from the scene. We also forward-declare our callback setup and the two functions that will generate the grid and torus geometry; their definitions come later.

    Code:
    #define BUF_WIDTH (512)
    #define SCR_WIDTH (480)
    #define SCR_HEIGHT (272)
    #define PIXEL_SIZE (4) /* change this if you change to another screenmode */
    #define FRAME_SIZE (BUF_WIDTH * SCR_HEIGHT * PIXEL_SIZE)
    #define ZBUF_SIZE (BUF_WIDTH * SCR_HEIGHT * 2) /* zbuffer seems to be 16-bit? */
    Now it gets fun! We define our buffer width as 512, the standard power-of-two stride. Next, we set up our screen resolution. Unlike the GBA, we can set this to be practically whatever we want! For the sake of this article, I have set it to 480x272 - the PSP's native screen resolution. The PSP does, however, support up to 720x576, which can be displayed at full resolution via component cables. We set the pixel size as 4 bytes, matching the standard 32-bit screen mode, and the frame size is calculated from the stride, height and pixel size. The Z-buffer is a depth buffer; it is 16-bit here, hence the multiplication by 2.

    Code:
    typedef struct Geometry
    {
        ScePspFMatrix4 world;
        unsigned int count;
        unsigned short* indices;
        Vertex_Normal* vertices;
        unsigned int color;
    } Geometry;
    
    void drawGeometry( Geometry* geom )
    {
        sceGuSetMatrix(GU_MODEL,&geom->world);
    
        sceGuColor(geom->color);
        sceGuDrawArray(GU_TRIANGLES,GU_NORMAL_32BITF|GU_VERTEX_32BITF|GU_INDEX_16BIT|GU_TRANSFORM_3D,geom->count,geom->indices,geom->vertices);
    }
    Now we're defining our geometry. Each Geometry holds a world matrix, an index count, pointers to its indices and vertices, and a colour. We also draw our geometry in this part: drawGeometry uses sceGuSetMatrix to set the model matrix, sets the colour, and then submits the indexed triangle list with sceGuDrawArray. If you don't understand parts, don't worry; these are publicly documented commands from the PSP libraries.

    Code:
    void drawShadowCaster( Geometry* geom )
    {
        sceGuSetMatrix(GU_MODEL,&geom->world);
    
        sceGuColor(0x00000000);
        sceGuDrawArray(GU_TRIANGLES,GU_NORMAL_32BITF|GU_VERTEX_32BITF|GU_INDEX_16BIT|GU_TRANSFORM_3D,geom->count,geom->indices,geom->vertices);
    }
    
    void drawShadowReceiver( Geometry* geom, ScePspFMatrix4 shadowProjMatrix )
    {
        sceGuSetMatrix(GU_MODEL,&geom->world);
    
        // multiply shadowmap projection texture by geometry world matrix
        // since geometry coords are in object space
    
        gumMultMatrix(&shadowProjMatrix, &shadowProjMatrix, &geom->world );
        sceGuSetMatrix(GU_TEXTURE,&shadowProjMatrix);
    
        sceGuColor(geom->color);
        sceGuDrawArray(GU_TRIANGLES,GU_NORMAL_32BITF|GU_VERTEX_32BITF|GU_INDEX_16BIT|GU_TRANSFORM_3D,geom->count,geom->indices,geom->vertices);
    }
    And now we get our shadows going as promised! drawShadowCaster draws geometry in flat black - the silhouette that ends up in the shadow map - while drawShadowReceiver draws geometry with the shadow map projected onto it as a texture, first multiplying the projection matrix by the object's world matrix since the geometry's coordinates are in object space.

    Code:
    int main(int argc, char* argv[])
    {
        SetupCallbacks();
    
        // generate geometry
    
        genGrid( GRID_ROWS, GRID_COLUMNS, GRID_SIZE, grid_vertices, grid_indices );       
        genTorus( TORUS_ROWS, TORUS_SLICES, TORUS_RADIUS, TORUS_THICKNESS, torus_vertices, torus_indices );       
    
        // flush cache so that no stray data remains
    
        sceKernelDcacheWritebackAll();
    
        // setup VRAM buffers
    
        void* frameBuffer = (void*)0;
        const void* doubleBuffer = (void*)0x44000;
        const void* renderTarget = (void*)0x88000;
        const void* depthBuffer = (void*)0x110000;
    Now, our all-important main function where all of our code comes together! We set up our usual callbacks, then we generate our grid and torus. Then, we do something important: we write the CPU's data cache back to memory with sceKernelDcacheWritebackAll, so the GPU sees the freshly generated geometry rather than stale data. We also set up our VRAM buffers as offsets from the start of VRAM.

    Code:
        // setup GU
    
        sceGuInit();
    
        sceGuStart(GU_DIRECT,list);
        sceGuDrawBuffer(GU_PSM_4444,frameBuffer,BUF_WIDTH);
        sceGuDispBuffer(SCR_WIDTH,SCR_HEIGHT,(void*)doubleBuffer,BUF_WIDTH);
        sceGuDepthBuffer((void*)depthBuffer,BUF_WIDTH);
        sceGuOffset(2048 - (SCR_WIDTH/2),2048 - (SCR_HEIGHT/2));
        sceGuViewport(2048,2048,SCR_WIDTH,SCR_HEIGHT);
        sceGuDepthRange(0xc350,0x2710);
        sceGuScissor(0,0,SCR_WIDTH,SCR_HEIGHT);
        sceGuEnable(GU_SCISSOR_TEST);
        sceGuDepthFunc(GU_GEQUAL);
        sceGuEnable(GU_DEPTH_TEST);
        sceGuFrontFace(GU_CW);
        sceGuShadeModel(GU_SMOOTH);
        sceGuEnable(GU_CULL_FACE);
        sceGuEnable(GU_TEXTURE_2D);
        sceGuEnable(GU_DITHER);
        sceGuFinish();
        sceGuSync(0,0);
    
        sceDisplayWaitVblankStart();
        sceGuDisplay(GU_TRUE);
    The GU setup part is great, because you're simply enabling whatever you'd like and tuning the GPU as you want. We initialise and start the GPU first, naturally. Then we tell it to draw into "frameBuffer" with the 16-bit GU_PSM_4444 format, display at 480x272, and set up double buffering; combined with waiting for V-blank before displaying, this works like double-buffered V-sync. We then set up our depth buffer (Z-buffer), the screen offset and the viewport - note the reversed depth range, which is why the depth test is GU_GEQUAL. You don't need to know the rest in detail, but you can enable depth testing, smooth shading, dithering and more, as the names suggest. Nice and easy, and it makes you feel like a developer!

    Code:
        // setup matrices
    
        ScePspFMatrix4 identity;
        ScePspFMatrix4 projection;
        ScePspFMatrix4 view;
    
        gumLoadIdentity(&identity);
    
        gumLoadIdentity(&projection);
        gumPerspective(&projection,75.0f,16.0f/9.0f,0.5f,1000.0f);
    
        {
            ScePspFVector3 pos = {0,0,-5.0f};
    
            gumLoadIdentity(&view);
            gumTranslate(&view,&pos);
        }
    
        ScePspFMatrix4 textureProjScaleTrans;
        gumLoadIdentity(&textureProjScaleTrans);
        textureProjScaleTrans.x.x = 0.5;
        textureProjScaleTrans.y.y = -0.5;
        textureProjScaleTrans.w.x = 0.5;
        textureProjScaleTrans.w.y = 0.5;
    
        ScePspFMatrix4 lightProjection;
        ScePspFMatrix4 lightProjectionInf;
        ScePspFMatrix4 lightView;
        ScePspFMatrix4 lightMatrix;
    
        gumLoadIdentity(&lightProjection);
        gumPerspective(&lightProjection,75.0f,1.0f,0.1f,1000.0f);
        gumLoadIdentity(&lightProjectionInf);
        gumPerspective(&lightProjectionInf,75.0f,1.0f,0.0f,1000.0f);
    
        gumLoadIdentity(&lightView);
        gumLoadIdentity(&lightMatrix);
    The next bit is messier, as we're setting up our matrices by hand. We load an identity matrix, a 75-degree perspective projection for the camera, and a view matrix pulled back 5 units from the origin. textureProjScaleTrans is the interesting one: it maps clip-space coordinates (-1 to 1) into texture coordinates (0 to 1), flipping Y, which we'll need later when projecting the shadow map onto the grid. Finally, we build two perspective projections for the light itself. It's probably not necessary to understand every line of this.

    Code:
        // define shadowmap
    
        Texture shadowmap = {
            GU_PSM_4444,
            0, 128, 128, 128,
            sceGeEdramGetAddr() + (int)renderTarget
        };
    And a nice easy part again. We're defining our shadow map: a 128x128, 16-bit (GU_PSM_4444) texture with no mipmaps, living at the render target address in VRAM. Tweaking the resolution and format of shadow maps like this can be a nice optimisation for any budding game devs!

    Code:
        // define geometry
    
        Geometry torus = {
            identity,
            sizeof(torus_indices)/sizeof(unsigned short),
            torus_indices,
            torus_vertices,
            0xffffff
        };
        Geometry grid = {
            identity,
            sizeof(grid_indices)/sizeof(unsigned short),
            grid_indices,
            grid_vertices,
            0xff7777
        };
    A little more defining: we build the torus and the grid as Geometry objects, pointing each at the index and vertex arrays we filled earlier, computing the index count with sizeof, and giving each a colour.

    Code:
        // run sample
    
        int val = 0;
    
        for(;;)
        {
            // update matrices
    
            // grid
            {
                ScePspFVector3 pos = {0,-1.5f,0};
    
                gumLoadIdentity(&grid.world);
                gumTranslate(&grid.world,&pos);
            }
    
            // torus
            {
                ScePspFVector3 pos = {0,0.5f,0.0f};
                ScePspFVector3 rot = {val * 0.79f * (GU_PI/180.0f), val * 0.98f * (GU_PI/180.0f), val * 1.32f * (GU_PI/180.0f)};
    
                gumLoadIdentity(&torus.world);
                gumTranslate(&torus.world,&pos);
                gumRotateXYZ(&torus.world,&rot);
            }
    And now the render loop begins. Each frame we rebuild the world matrices: the grid stays fixed below the torus, and the torus rotates a little more every frame - val counts frames, and each axis turns at a different rate in degrees, converted to radians with GU_PI/180. A bit messy for sure, with all of the numbers involved, but if you want to find out more about how these specific commands work you can find online resources such as the archived pspdev.org and ps2dev.org.

    Code:
            // orbiting light
            {
                ScePspFVector3 lightLookAt = { torus.world.w.x, torus.world.w.y, torus.world.w.z };
                ScePspFVector3 rot1 = {0,val * 0.79f * (GU_PI/180.0f),0};
                ScePspFVector3 rot2 = {-(GU_PI/180.0f)*60.0f,0,0};
                ScePspFVector3 pos = {0,0,LIGHT_DISTANCE};
    
                gumLoadIdentity(&lightMatrix);
                gumTranslate(&lightMatrix,&lightLookAt);
                gumRotateXYZ(&lightMatrix,&rot1);
                gumRotateXYZ(&lightMatrix,&rot2);
                gumTranslate(&lightMatrix,&pos);
            }
    
            gumFastInverse(&lightView,&lightMatrix);
    Over halfway there! Here's our light source for our shadow casting. The light orbits so as to cast a different and dynamic shadow.

    Code:
            // render to shadow map
    
            {
                sceGuStart(GU_DIRECT,list);
    
                // set offscreen texture as a render target
    
                sceGuDrawBufferList(GU_PSM_4444,(void*)renderTarget,shadowmap.stride);
    
                // setup viewport   
    
                sceGuOffset(2048 - (shadowmap.width/2),2048 - (shadowmap.height/2));
                sceGuViewport(2048,2048,shadowmap.width,shadowmap.height);
    
                // clear screen
    
                sceGuClearColor(0xffffffff); // clear to white = "no shadow"; casters are drawn in black
                sceGuClearDepth(0);
                sceGuClear(GU_COLOR_BUFFER_BIT|GU_DEPTH_BUFFER_BIT);
    
                // setup view/projection from light
    
                sceGuSetMatrix(GU_PROJECTION,&lightProjection);
                sceGuSetMatrix(GU_VIEW,&lightView);
    
                // shadow casters are drawn in black
                // disable lighting and texturing
    
                sceGuDisable(GU_LIGHTING);
                sceGuDisable(GU_TEXTURE_2D);
    
                // draw torus to shadow map
    
                drawShadowCaster( &torus );
    
                sceGuFinish();
                sceGuSync(0,0);
            }
    This is where things get fun: the shadow map rendering. We start a display list, then set the off-screen texture as the render target. We set up the viewport and clear it to white - white means "no shadow" once the map is applied. The rest should be fairly self explanatory: the casters are drawn flat black, so we disable lighting and texturing since only the silhouette matters. Then we finish and sync!

    Code:
            // render to frame buffer
    
            {
                sceGuStart(GU_DIRECT,list);
    
                // set frame buffer
    
                sceGuDrawBufferList(GU_PSM_4444,(void*)frameBuffer,BUF_WIDTH);
    
                // setup viewport
    
                sceGuOffset(2048 - (SCR_WIDTH/2),2048 - (SCR_HEIGHT/2));
                sceGuViewport(2048,2048,SCR_WIDTH,SCR_HEIGHT);
                
                // clear screen
    
                sceGuClearColor(0xff554433);
                sceGuClearDepth(0);
                sceGuClear(GU_COLOR_BUFFER_BIT|GU_DEPTH_BUFFER_BIT);
    
                // setup view/projection from camera
    
                sceGuSetMatrix(GU_PROJECTION,&projection);
                sceGuSetMatrix(GU_VIEW,&view);
                sceGuSetMatrix(GU_MODEL,&identity);
    
                // setup a light
                ScePspFVector3 lightPos = { lightMatrix.w.x, lightMatrix.w.y, lightMatrix.w.z };
                ScePspFVector3 lightDir = { lightMatrix.z.x, lightMatrix.z.y, lightMatrix.z.z };
    
                sceGuLight(0,GU_SPOTLIGHT,GU_DIFFUSE,&lightPos);
                sceGuLightSpot(0,&lightDir, 5.0, 0.6);
                sceGuLightColor(0,GU_DIFFUSE,0x00ff4040);
                sceGuLightAtt(0,1.0f,0.0f,0.0f);
                sceGuAmbient(0x00202020);
                sceGuEnable(GU_LIGHTING);
                sceGuEnable(GU_LIGHT0);
    
                // draw torus
    
                drawGeometry( &torus );
    
                // setup texture projection
    
                sceGuTexMapMode( GU_TEXTURE_MATRIX, 0, 0 );
                sceGuTexProjMapMode( GU_POSITION );
    
                // set shadowmap as a texture
    
                sceGuTexMode(shadowmap.format,0,0,0);
                sceGuTexImage(shadowmap.mipmap,shadowmap.width,shadowmap.height,shadowmap.stride,shadowmap.data);
                sceGuTexFunc(GU_TFX_MODULATE,GU_TCC_RGB);
                sceGuTexFilter(GU_LINEAR,GU_LINEAR);
                sceGuTexWrap(GU_CLAMP,GU_CLAMP);
                sceGuEnable(GU_TEXTURE_2D);
    
                // calculate texture projection matrix for shadowmap
     
                ScePspFMatrix4 shadowProj;
                gumMultMatrix(&shadowProj, &lightProjectionInf, &lightView);
                gumMultMatrix(&shadowProj, &textureProjScaleTrans, &shadowProj);
    
                // draw grid receiving shadow
    
                drawShadowReceiver( &grid, shadowProj );
    
                sceGuFinish();
                sceGuSync(0,0);
            }
    
            sceDisplayWaitVblankStart();
            frameBuffer = sceGuSwapBuffers();
    
            val++;
        }
    
        sceGuTerm();
    
        sceKernelExitGame();
        return 0;
    }
    
    Wow... that's a lot to go through. Okay, so basically, now we're rendering our hard work into the frame buffer so it can be displayed on the PSP screen. We set up the camera matrices, place a spotlight at the light's position, draw the lit torus, then bind the shadow map as a texture and project it onto the grid. A few things worth noting: for sceGuTexFilter you can use nearest or linear filtering depending on preference, but you can't use anisotropic filtering the way modern graphics hardware does. And the reason we swap frame buffers at the end is a continuation of our double buffering plan, so we avoid screen tearing and keep the display consistent.

    Code:
    /* Exit callback */
    int exit_callback(int arg1, int arg2, void *common)
    {
        sceKernelExitGame();
        return 0;
    }
    
    /* Callback thread */
    int CallbackThread(SceSize args, void *argp)
    {
        int cbid;
    
        cbid = sceKernelCreateCallback("Exit Callback", exit_callback, NULL);
        sceKernelRegisterExitCallback(cbid);
    
        sceKernelSleepThreadCB();
    
        return 0;
    }
    
    /* Sets up the callback thread and returns its thread id */
    int SetupCallbacks(void)
    {
        int thid = 0;
    
        thid = sceKernelCreateThread("update_thread", CallbackThread, 0x11, 0xFA0, 0, 0);
        if(thid >= 0)
        {
            sceKernelStartThread(thid, 0, 0);
        }
    
        return thid;
    }
    Callbacks! This is the exit-callback boilerplate: a callback thread registers an exit callback so the user can quit the program cleanly from the Home menu - convenient code we'll reuse in every program.

    Code:
    /* usefull geometry functions */
    void genGrid( unsigned rows, unsigned columns, float size, Vertex_Normal* dstVertices, unsigned short* dstIndices )
    {
        unsigned int i,j;
    
        // generate grid (TODO: tri-strips)
        for (j = 0; j < rows; ++j)
        {
            for (i = 0; i < columns; ++i)
            {
                Vertex_Normal* curr = &dstVertices[i+j*columns];
    
                curr->nx = 0.0f;
                curr->ny = 1.0f;
                curr->nz = 0.0f;
    
                curr->x = ((i * (1.0f/((float)columns)))-0.5f) * size;
                curr->y = 0;
                curr->z = ((j * (1.0f/((float)columns)))-0.5f) * size;
            }
        }
    
        for (j = 0; j < rows-1; ++j)
        {
            for (i = 0; i < columns-1; ++i)
            {
                unsigned short* curr = &dstIndices[(i+(j*(columns-1)))*6];
    
                *curr++ = i + j * columns;
                *curr++ = (i+1) + j * columns;
                *curr++ = i + (j+1) * columns;
    
                *curr++ = (i+1) + j * columns;
                *curr++ = (i+1) + (j+1) * columns;
                *curr++ = i + (j + 1) * columns;
            }
        }
    }
    Here's a nice long function to generate our grid geometry: it lays the vertices flat on the XZ plane with their normals pointing up, then builds the index list quad by quad. (Note that the z coordinate divides by columns rather than rows - harmless here, since the grid is square.) Good thing we wrapped it in a reusable function!

    Code:
    void genTorus( unsigned slices, unsigned rows, float radius, float thickness, Vertex_Normal* dstVertices, unsigned short* dstIndices )
    {
        unsigned int i,j;
    
        // generate torus (TODO: tri-strips)
        for (j = 0; j < slices; ++j)
        {
            for (i = 0; i < rows; ++i)
            {
                struct Vertex_Normal* curr = &dstVertices[i+j*rows];
                float s = i + 0.5f;
                float t = j;
                float cs,ct,ss,st;
    
                cs = cosf(s * (2*GU_PI)/slices);
                ct = cosf(t * (2*GU_PI)/rows);
                ss = sinf(s * (2*GU_PI)/slices);
                st = sinf(t * (2*GU_PI)/rows);
    
                curr->nx = cs * ct;
                curr->ny = cs * st;
                curr->nz = ss;
    
                curr->x = (radius + thickness * cs) * ct;
                curr->y = (radius + thickness * cs) * st;
                curr->z = thickness * ss;
            }
        }
    
        for (j = 0; j < slices; ++j)
        {
            for (i = 0; i < rows; ++i)
            {
                unsigned short* curr = &dstIndices[(i+(j*rows))*6];
                unsigned int i1 = (i+1)%rows, j1 = (j+1)%slices;
    
                *curr++ = i + j * rows;
                *curr++ = i1 + j * rows;
                *curr++ = i + j1 * rows;
    
                *curr++ = i1 + j * rows;
                *curr++ = i1 + j1 * rows;
                *curr++ = i + j1 * rows;
            }
        }
    }
    And the final piece of this code: our torus generator. Now our code is complete - a function to build our torus, our grid geometry, an orbiting light source and dynamic shadows, all in about 500 lines of code! This is what the final code should look like, courtesy of the PSP SDK:

    Code: (full listing omitted here - it's the PSP SDK sample linked in the footnote at the end)

    Wow, that's almost exhausting just to look at. But we got through it - our code is finished and ready! The last thing to do is to set up a makefile so we can build it. Let's save the code as shadowprojection.c and use this makefile:

    Code:
    TARGET = shadowprojection
    OBJS = shadowprojection.o
    
    INCDIR =
    CFLAGS = -G0 -Wall
    CXXFLAGS = $(CFLAGS) -fno-exceptions -fno-rtti
    ASFLAGS = $(CFLAGS)
    
    LIBDIR =
    LDFLAGS =
    LIBS = -lpspgum -lpspgu -lm
    
    EXTRA_TARGETS = EBOOT.PBP
    PSP_EBOOT_TITLE = Shadow Projection Sample
    
    PSPSDK=$(shell psp-config --pspsdk-path)
    include $(PSPSDK)/lib/build.mak
    So we haven't set many flags in particular - not even an optimisation level like -O2, because we don't need the speedup; the PSP can power through this by itself! This compiles everything into a standard EBOOT.PBP with the internal title "Shadow Projection Sample". Once this is done, you can connect your PSP to your computer with the USB cable that came with it (wow, nice move Sony!) and copy the EBOOT across so you can run it. It will also run on PPSSPP or any PSP emulator that renders 3D graphics well.

    Congratulations! If you made it this far, you've done a great job. The code above was more advanced than most people would expect from a second lesson, but I really wanted to show you how a real 3D renderer is made. Thankfully, you often won't have to do this sort of programming yourself, because game engines exist and you can simply build your games on top of them. Still, it's a fascinating insight into what goes on behind the scenes. Let me know your thoughts below, whether you enjoyed this or have suggestions for improving it. You can also follow me to be notified of future posts like this. Until next time, thanks for reading!



    Footnote: The source to this code can be found on the PSP SDK github, in the samples section, under the "gu" directory. Someone rightly pointed out that I should credit the original writings/code, so here it is! I'd also like to apologise, since in the previous articles - especially towards the end - I got tired and lazy and I started copying more and more liberally from pre-existing articles and resources instead of writing things in my own words. To rectify this, I have written this entire article without any assistance except the PSP SDK sample code! :D
    https://github.com/pspdev/pspsdk/tree/master/src/samples/gu/shadowprojection
    TheMrIron2 The PSP was Sony's first and biggest foray into the handheld console scene. While it is remembered as a powerful system with a plethora of console-quality games, it was also hacked rapidly, and its homebrew scene grew at an absurd rate. The scene reached incredible highs, and the PSP remains one of the consoles with the biggest and proudest homebrew catalogues: hackable in minutes with nothing but a USB cable, and home to everything from a near-fullspeed N64 emulator and all of the standard homebrew goodies to more unusual things such as a universal remote program, a screen mirroring program so you can play PC games on your PSP, and even a broken, simple port of nullDC - the Dreamcast emulator.

    So, as such a massively famous system in the hacking scene, I think it would be good to introduce you to the PSP as a coding companion. It will certainly serve you well, because it's a near-perfect programming system: unlike the GBA, it supports higher-level languages like Python while still letting C code be compiled easily and effectively. It gives you plenty of headroom for 3D experiments, lets you go as close to the metal as C will allow, and has its own set of nice APIs - which you can largely avoid if you want, though you won't quite get the same bare-metal experience as on the GBA.

    Hardware
    Without further ado, a reminder of the PSP's specs so you know what you're working with. Unlike the GBA, the PSP has a GPU. This opens up a whole new world of programming: once you get comfortable, you can try your hand at accelerating your programs with hardware 3D rendering for maximum efficiency, though the PSP has enough horsepower to brute-force some software rendering if you'd rather avoid the GPU at first. The PSP's CPU is MIPS-based, similar to the PS2's, and runs at any desired clockspeed from 1 to 333MHz; generally, these are preset to 222, 266, 300 and 333MHz. The GPU and memory bus are tied to the chosen CPU speed, operating at exactly half of it at all times - so at 333MHz, the bus and GPU run at 166MHz. The PSP also has a generous 32MB of RAM - the same amount as the PS2 - plus 2MB of dedicated VRAM. Models after the original PSP-1000 have 64MB of RAM; while commercial games could only utilise 32MB, homebrew can access practically all of this memory, though some is still reserved for system tasks.

    [​IMG]
    So we have lots of generous hardware overhead if we ever want to venture into 3D; that's great! But for now, we'll be sticking with simple demos to ease us into this.

    Setting Up

    Setting everything up is a little less straightforward. The PSP SDK on GitHub still receives regular updates; however, I have heard that it's currently not as stable as it could be - for example, a friend of mine was baffled for a solid week trying to compile DaedalusX64 for PSP until he realised the current toolchain on GitHub was the problem. So, to be safe, I recommend checking out this version instead - the last stable one.

    To use this on Windows, I recommend setting up a Linux virtual machine or Cygwin: the Windows resources are currently scarce and simply inferior to Linux's toolset, unfortunately, so a VM is your best bet. There is an excellent tutorial on setting this up with the PSP toolchain on Wikibooks here, and I strongly advise following those instructions. If you are already running Linux, simply download the above toolchain.

    Makefile

    A makefile is simply a file that tells the compiler some information about our program and lets us build it on command. Here is a template for a makefile:


    Code:
    TARGET = my_program
    OBJS   = main.o myLibrary.o
    
    INCDIR   =
    CFLAGS   = -G0 -Wall -O2
    CXXFLAGS = $(CFLAGS) -fno-exceptions -fno-rtti
    ASFLAGS  = $(CFLAGS)
    
    LIBDIR  =
    LDFLAGS =
    LIBS    = -lm
    
    BUILD_PRX = 1
    
    EXTRA_TARGETS   = EBOOT.PBP
    PSP_EBOOT_TITLE = My Program
    PSP_EBOOT_ICON= ICON0.png
    PSP_EBOOT_PIC1= PIC1.png
    PSP_EBOOT_SND0= SND0.at3
    
    PSPSDK=$(shell psp-config --pspsdk-path)
    include $(PSPSDK)/lib/build.mak
    It's not too bad - you don't actually have to touch most of this at all. You can change names, such as "My Program", and if you're really looking for a performance gain you can change flags, such as swapping -O2 for -O3 for more aggressive optimisation. However, we won't need to change much at all. TARGET tells the build what to name the output, and OBJS lists the object files to build - "main.o" and "myLibrary.o", since we're using "main.c" as the file where we write our code and "myLibrary.c" as a helper file.

    Next we tell it to build a PRX binary rather than a static ELF. Static ELFs are generally deprecated, and only PRX-based homebrew can be signed to run on any PSP firmware. Afterwards we also create an "EBOOT.PBP", which is the executable - like an EXE on Windows. Then you give it a title and, optionally, an icon (144x80), a background picture (480x272) and a PSP sound file (.at3). If you don't want one of these, simply delete the line.

    Callbacks

    Next, we'll write some common helper code - including our exit callback - that we'll use in almost every program, so we don't have to re-write it each time. Create a folder called "common" in your programming directory to store it.

    Let's start with a header file called "callback.h". It declares the few things our programs will need in order to exit cleanly.

    Code:
    #ifndef COMMON_CALLBACK_H
    #define COMMON_CALLBACK_H
    
    int isRunning();
    int setupExitCallback();
    
    #endif
    The 'ifndef' include guard makes sure this file only gets included once; otherwise we'd get redefinition errors. The rest is pretty self-explanatory: we declare the two functions we will use - "isRunning()" to check whether the user has requested to quit, and "setupExitCallback()" which sets up everything the program needs to exit cleanly on the PSP.

    That's all for "callback.h". You can save and close that now. Now that we have the header definitions we can also create the source file: name it "callback.c". We'll start by including the file "pspkernel.h" which gives us access to several kernel methods.

    Code:
    #include <pspkernel.h>
    Next, we create a boolean flag - a value that is either true (non-zero) or false (zero). Calling "isRunning()" tells us whether the user has requested to exit the application. We'll use it in our main loop so we can clean up any leftover memory and exit the program without crashing it.

    Code:
    static int exitRequest  = 0;
    
    int isRunning()
    {
        return !exitRequest;
    }
    Simple so far, as you can see. The next part is a bit more complicated, but don't worry - you don't need to fully understand it. It creates a new thread which registers an exit callback; the callback sets "exitRequest" to 1 when the user presses Home and chooses Exit. Here's the complete code for "callback.c":

    Code:
    #include <pspkernel.h>
    
    static int exitRequest = 0;
    
    int isRunning()
    {
        return !exitRequest;
    }
    
    int exitCallback(int arg1, int arg2, void *common)
    {
        exitRequest = 1;
        return 0;
    }
    
    int callbackThread(SceSize args, void *argp)
    {
        int callbackID;
    
        callbackID = sceKernelCreateCallback("Exit Callback", exitCallback, NULL);
        sceKernelRegisterExitCallback(callbackID);
    
        sceKernelSleepThreadCB();
    
        return 0;
    }
    
    int setupExitCallback()
    {
        int threadID = 0;
    
        threadID = sceKernelCreateThread("Callback Update Thread", callbackThread, 0x11, 0xFA0, THREAD_ATTR_USER, 0);
        
        if(threadID >= 0)
        {
            sceKernelStartThread(threadID, 0, 0);
        }
    
        return threadID;
    }

    Again, understanding it isn't too important right now, since this won't change and it will simply be included in our programs in future for easy exiting.

    Hello World

    We have already gone through a lot, and we have set down some foundations for our future PSP projects! That's actually a lot of the hard part. The next part is simple; we will make a simple template for a program and throw a "Hello World" into it.

    We'll start by including “pspkernel.h” which will allow us to exit the application, "pspdebug.h" so that we can get a simple debug screen started, "pspdisplay.h" for "sceDisplayWaitVblankStart" function - so we can synchronise the screen timing with V-Blanks, as we covered in the GBA programming post - and "callback.h" of course so that the user can quit at any time by pressing 'home' and then 'exit'.

    Code:
    #include <pspkernel.h>
    #include <pspdebug.h>
    #include <pspdisplay.h>
    
    #include "../common/callback.h"
    Next part is also quite simple; we simply give the PSP some details about our program.

    Code:
    #define VERS    1 // module version number
    #define REVS    0 // module revision number
    
    PSP_MODULE_INFO("Hello World", PSP_MODULE_USER, VERS, REVS);
    PSP_MAIN_THREAD_ATTR(PSP_THREAD_ATTR_USER);
    PSP_HEAP_SIZE_MAX();
    
    #define printf pspDebugScreenPrintf
    In PSP_MODULE_INFO we tell the PSP the name of our program, its attributes and its version. We don't need to get too deep into this part. I have also defined "printf" as "pspDebugScreenPrintf". "printf" is the standard C function for printing text; the define just makes it an alias for the longer PSP API name, so when the program is compiled, every "printf" becomes a call to "pspDebugScreenPrintf". Simple! That's definitions for you. Next part:

    Code:
    int main()
    {       
        pspDebugScreenInit();
        setupExitCallback();
    
        while(isRunning())
        {
            pspDebugScreenSetXY(0, 0);
            printf("Hello World!");
            sceDisplayWaitVblankStart();
        }
    
        sceKernelExitGame();   
        return 0;
    }
    This is the main function of our program, as "int main()" suggests - the heart of any C program. First we initialise the debug screen and set up our exit callback. Inside the loop, we reset the cursor to (0,0) so each frame prints over the same spot instead of scrolling, then print our message. To prevent screen tearing (i.e. to make sure frames are delivered with perfect consistency), we call "sceDisplayWaitVblankStart". Once the user quits and the loop breaks (remember, we're checking "isRunning()"), we make a final call to "sceKernelExitGame()", which exits the application, and return zero.

    Now if you are using an IDE that can also compile your PSP programs, you can hit compile and put the "EBOOT.PBP" in a folder on your PSP, and then run it. If on the other hand you choose to do things manually, then we will have to create the Makefile before compiling. I'll be making the Makefile for the sake of covering it in this tutorial. Remember we talked about one of these earlier? Well, let's flesh it out a bit!

    Code:
    TARGET        = hello_world
    OBJS        = main.o ../common/callback.o
    
    INCDIR        =
    CFLAGS        = -G0 -Wall -O2
    CXXFLAGS    = $(CFLAGS) -fno-exceptions -fno-rtti
    ASFLAGS    = $(CFLAGS)
    
    LIBDIR        =
    LDFLAGS    =
    LIBS        = -lm
    
    BUILD_PRX = 1
    
    EXTRA_TARGETS    = EBOOT.PBP
    PSP_EBOOT_TITLE= Hello World
    
    PSPSDK    = $(shell psp-config --pspsdk-path)
    include $(PSPSDK)/lib/build.mak
    So we've given our program a name, listed the object files to build and told the SDK to package an EBOOT with the title "Hello World" in our Makefile. Now we simply run the "make" command with our toolchain and we have compiled our first program! This EBOOT file will boot on a real PSP. Place it in PSP:/PSP/GAME/helloworld/ and it will appear in the games section of your PSP! (You can also boot it using an emulator.)

    That was cool! It's great to see our work pay off in such a nice way. As you can see, the PSP is quite a standardised platform; programs have to be named and packaged in a specific way. But it's worth it, because the final result looks proudly official and complete.

    Button Input

    So in this second tutorial we will be learning how to work with button input. Let's start with our headers again. It's the same deal as before, except this time we're including "pspctrl.h" so we can read the controller.

    Code:
    #include <pspkernel.h>
    #include <pspdebug.h>
    #include <pspctrl.h>
    #include "../common/callback.h"
    
    The code here is the same, aside from changing the name and version number.

    Code:
    #define VERS    2
    #define REVS    1
    
    PSP_MODULE_INFO("Button Input", PSP_MODULE_USER, VERS, REVS);
    PSP_MAIN_THREAD_ATTR(PSP_THREAD_ATTR_USER);
    PSP_HEAP_SIZE_MAX();
    
    #define printf pspDebugScreenPrintf
    Now for the main part of our program again.

    Code:
    int main()
    {       
        pspDebugScreenInit();
        setupExitCallback();
    
        int running = isRunning();
    
        SceCtrlData buttonInput;
    
        sceCtrlSetSamplingCycle(0);
        sceCtrlSetSamplingMode(PSP_CTRL_MODE_ANALOG);
    In our main function we do the usual setup, but this time we create a variable called "running" so that we can update it when the exit callback fires or when the user presses the Start button. Then we declare the structure that will receive button input, set the sampling cycle to 0 (the default) and enable analog mode so that we can read the position of the analog pad.

    Code:
        while(running)
        {
            running = isRunning();
    
            pspDebugScreenSetXY(0, 0);
            printf("Analog X = %d ", buttonInput.Lx);
            printf("Analog Y = %d \n", buttonInput.Ly);
    Now we start a while loop and, at the top of each pass, update our "running" variable from the exit callback so we know whether we should quit. We reset the print position and print out the analog pad position; passing the variable as the second parameter and using '%d' as a placeholder formats it into the string. (Note that "buttonInput" is filled in by the "sceCtrlPeekBufferPositive" call a little further down the loop, so strictly speaking the very first frame prints whatever happened to be in the struct; in a larger program you would read the pad before printing it.)

    One thing to remember: when you read the position of the analog stick through the 'Lx' and 'Ly' fields, you get a value from 0 to 255. You can subtract 128 from that so that the centre becomes 0. However, the stick almost never rests at exactly the centre; it's usually ever so slightly off, so your program will pick up tiny phantom movements. The usual fix is a "dead zone" - ignoring any value within a small range around the centre.

    Code:
            sceCtrlPeekBufferPositive(&buttonInput, 1);
    
            if(buttonInput.Buttons != 0)
            {
                if(buttonInput.Buttons & PSP_CTRL_START){
                                        printf("Start");
                                        running = 0;
                } 
    Okay, so this is the core of our program. "Buttons" is a bitmask - each button corresponds to one bit - so "buttonInput.Buttons & PSP_CTRL_START" is non-zero exactly when Start is held. If it is, we print "Start" and set "running" to 0; by flipping "running" from 1 to 0, we break out of the loop and stop our program smoothly. Let's continue with the other buttons.

    Code:
            if(buttonInput.Buttons & PSP_CTRL_SELECT)    printf("Select");

            if(buttonInput.Buttons & PSP_CTRL_UP)        printf("Up");
            if(buttonInput.Buttons & PSP_CTRL_DOWN)      printf("Down");
            if(buttonInput.Buttons & PSP_CTRL_RIGHT)     printf("Right");
            if(buttonInput.Buttons & PSP_CTRL_LEFT)      printf("Left");

            if(buttonInput.Buttons & PSP_CTRL_CROSS)     printf("Cross");
            if(buttonInput.Buttons & PSP_CTRL_CIRCLE)    printf("Circle");
            if(buttonInput.Buttons & PSP_CTRL_SQUARE)    printf("Square");
            if(buttonInput.Buttons & PSP_CTRL_TRIANGLE)  printf("Triangle");

            if(buttonInput.Buttons & PSP_CTRL_RTRIGGER)  printf("R-Trigger");
            if(buttonInput.Buttons & PSP_CTRL_LTRIGGER)  printf("L-Trigger");
            }
        }
    At the end we close our braces, ending the button checks and the main loop. You may notice that we don't check for every button; this is because reading the remaining buttons (such as Home) requires kernel mode, which may be covered in a future tutorial.

    And that's it! Let's set up a Makefile for this so we can compile it.

    Code:
    TARGET        = ButtonInput
    OBJS        = main.o ../common/callback.o
    
    INCDIR        =
    CFLAGS        = -O2 -G0 -Wall
    CXXFLAGS    = $(CFLAGS) -fno-exceptions -fno-rtti
    ASFLAGS    = $(CFLAGS)
    
    BUILD_PRX    = 1
    
    LIBDIR        = ./
    LIBS        = -lm
    LDFLAGS    =
    
    EXTRA_TARGETS        = EBOOT.PBP
    PSP_EBOOT_TITLE    = ButtonInput
    
    PSPSDK    = $(shell psp-config --pspsdk-path)
    include $(PSPSDK)/lib/build.mak
    Now we run "make" and we should have our second PSP program!

    That wraps up this tutorial. Since a lot of new concepts were introduced - makefiles, callbacks and so on - I thought we'd leave it there for now. (It's also almost 1AM for me)
    I hope you guys enjoyed this tutorial as well and if people like this, I'll keep making them! Though admittedly my knowledge is pretty rusty and I'm pretty reliant on pre-existing resources at this point. :P
    If you enjoyed it, please like it, tell me what you think in the comments and follow me to get notified of further posts. Thanks for reading!
    TheMrIron2 Hey all, it's been a while since I've done one of these. The previous tech talk articles I've done focused on console hardware, but now I'm going to talk about software. Recently, I've been revisiting Shadow of the Colossus on my PS2, and I've been comparing it to the PS4 remake as well as the PS3 remaster. Before I start comparing anything, though, here's a refresher of just how crazy SOTC was on PS2. I'll refrain from getting too technical.

    To begin, Shadow of the Colossus featured incredibly robust physics - which it needed, given that the game revolves around nothing but 16 colossal monsters. SOTC featured real-time inverse kinematics calculations - in English, this means that bone structures were recalculated on the fly to match the environment. For example, when the player character - "Wander" - did something as simple as walking up a hill, his feet would adjust to the slope.

    [​IMG]
    The game featured other nice touches, such as an accumulative motion blur which became more intense as the camera panned faster. SOTC's "making of" documents reveal that the motion blur is calculated by adjusting the environmental elements of the current frame based on the camera velocity from the last frame, then combining the two. This also helped make the drops below 30FPS - which were more frequent than you would like in battle, down to 20FPS and occasionally into the teens - much more tolerable, and it lent itself very nicely to the game's intricate animation. It never felt like a distraction from the overall visual presentation, and was one of the nicest implementations of motion blur on 6th generation hardware.

    [​IMG]

    Now to get into some other serious stuff. Shadow of the Colossus was one of the only games of its time to simulate HDR rendering. Team ICO knew - as did any developer who tried - that real-time HDR rendering simply wasn't possible on console hardware of the era. So Team ICO tried it anyway. The technique used in Shadow of the Colossus was an evolution of their use of bloom in ICO, combined with some intelligently placed manual triggers. In the Shrine of Worship at the beginning of the game, the windows and entrance have manually placed HDR triggers. This means that light outside is overwhelmingly bright and bloomed and - in turn - once you walk over the trigger and look back inside, the interior is extremely dark. It looks convincing, and in other areas of the game the white point is adjusted to give an HDR look as well.

    [​IMG]
    Despite being set in a lifeless land where no living soul has set foot for many years, water plays an integral role in parts of the game. You can swim, and ripples trail behind you as you do. You can also dive underwater, where a nice screen-distortion effect kicks in, along with some extra particles because you're, well, underwater. The same applies to the few colossi that inhabit lakes; they come to the surface, causing a splash and kicking up particles, and one of them descends back into the depths (see below), stirring up a few more as it submerges. SOTC also has some reflections - a simple but nice touch. They are not real-time, from what I can tell, but they convincingly reflect the surrounding geometry and look completely fine in motion.

    [​IMG]

    Additionally, one of the biggest features of the game's monsters is their fur. Pixel shaders were not available on PS2, and even with them it was difficult to create convincing-looking fur. So what they did, essentially, was abuse the PS2's bandwidth instead. For those who are not aware, the PS2's bandwidth was an astronomical 48GB/s, so in a game like Shadow of the Colossus running at 30FPS they had 1.6GB of data to freely manipulate per frame. It was a clever way of dodging the requirement for shaders, and it was truly using the PS2 as its engineers intended. The fur was composed of 6 diffuse layers, with a new texture every 2 layers.

    [​IMG]

    The fact that the fur was not just one texture, but three overlapping textures fused together gave it a remarkable sense of depth. Combined with a parallax effect, the end result was truly a spectacle and was some of the best fur you would see on hardware of its era.

    [​IMG]

    So clearly, this was a technical masterpiece for the PS2, and even a staple of technical progress for the 6th generation. The game ran at 512x448 to conserve video memory - still double the pixel output of its predecessor ICO, which ran at 512x224 - but with 480p "progressive scan" mode activated, this increased to a full 640x448. Unfortunately, Shadow of the Colossus's technical wizardry came at a cost: framerate. SOTC struggled to maintain a consistent framerate on PS2 and often fell to 20FPS in taxing boss battles, with around 20,000 polygons per boss in many cases. Despite this, the sheer technical achievement on display was enough for many to overlook it, and the game became a cult classic, releasing as a flagship at the end of the PS2's life in 2005.

    Thankfully, Shadow of the Colossus didn't end there. SOTC was remastered for PS3 in 2011, bundled with ICO. The remaster remained faithful to its roots, preserving most of the art and focusing instead on increasing resolution and touching the game up for PS3. It was still quite difficult to run, however. Unlike ICO, which ran at a beautiful 1920x1080 with MLAA anti-aliasing engaged, Shadow of the Colossus targeted 720p with MLAA instead. However, if you force 1080p output from your PS3, SOTC activates another interesting mode: 960x1080 - half the horizontal resolution of a true 1080p target - scaled on the fly, with MLAA active as usual. Some people may be distracted by the rectangular pixels, however; similar to Mario Odyssey in the Switch's portable mode (640x720), the console is doubling the horizontal lines on output, which also amplifies the pixel-crawl artifacts MLAA is a little notorious for.

    [​IMG]
    Additionally, the HDR recreation is handled differently on PS3. While the overall technique remains the same, the resolutions involved change a bit. During the rendering of an HDR scene, Shadow of the Colossus on PS2 made a black-and-white "mask" of the scene; this was sent to the Z-buffer (AKA the "depth" buffer) and downscaled with bilinear filtering to 64x64 to create a blurring effect. The PS3 version simply increases this downscaled resolution before the final result is composited back into the main scene, which gives the lighting a cleaner look but also somewhat lessens the bloom.

    Shadow of the Colossus on PS3, by and large, improves the experience. At 720p, the game runs very consistently at 30FPS and looks pin-sharp. It's a great way to experience Shadow of the Colossus, even today. At 960x1080 and in 3D, however, things get worse. 960x1080 demands 12.5% more pixels per frame than the 720p mode - more bandwidth and fillrate from the PS3's GPU - and this just tips it over the edge; as I said, SOTC was not an easy game to run, especially compared to ICO, and the framerate noticeably begins to drop. It's not as bad as the PS2 version, but performance is noticeably less consistent. 1080p mode still runs at 30FPS a lot of the time, but once you start grabbing onto a colossus and attacking, the framerate is liable to drop into the low 20s. In 3D this is even worse, because the scene has to be rendered twice; 3D mode actually runs worse than the PS2 original, hitting as low as 10FPS on occasion and generally struggling to hit 30FPS with any sliver of consistency. Expect boss battles in the 20s, dropping into the teens as you start grappling. Digital Foundry did an excellent article on the differences and a performance analysis: you can check it out here.

    [​IMG]
    The PS4 version of Shadow of the Colossus is a ground-up remake by Bluepoint, the same team behind the PS3 remaster, released in 2018. Bluepoint kept a sizable amount of the PS2/PS3 code intact, changing parts as necessary such as control schemes, and completely revamped the visual presentation. Fur is now authentically simulated, HDR is now actual HDR, and countless modern rendering techniques were introduced thanks to the visual rework.

    [​IMG]

    Shadow of the Colossus on PS4 does a beautiful job of re-imagining SOTC for the modern day. While the original will always be a piece of cult history - and while the remake sometimes fails to recreate the solemn, empty atmosphere of the original as a result of all the additional detail - the PS4 remake is simply exceptional. On a base PS4, SOTC runs at a full 1920x1080 at 30FPS with a sharp temporal anti-aliasing (TAA) technique in play. On the PS4 Pro, though, is where things get interesting. Due to the paltry amount of bandwidth the PS4 Pro has for a "4K" system, SOTC is simply unable to run at full 4K; instead, Bluepoint opted for 1440p - just under half the pixels of true 4K - at 30FPS. This still looks quite good, at nearly double the pixel count of the base system, and temporal anti-aliasing keeps rough edges at bay. That isn't the full story, though: the game also gives Pro users a "Performance" mode. This drops the resolution to 1080p, but targets 60FPS for the first time. It's an incredibly fluid and responsive way to play, and TAA keeps the game looking respectably sharp even on a 4K display.

    [​IMG]
    So with that all in mind, I think this wraps up the 3-way analysis and comparison of Shadow of the Colossus on 3 respective generations of hardware. If any of you have not played the game before, I recommend playing it - on any platform - and enjoying what is truly an experience and a game unlike any other. Thank you for reading and I hope you learned something from it! Remember to share this article if you found it informative, and share your thoughts in the comment section.
    TheMrIron2 The Wii U, released in 2012, was Nintendo's successor to the Wii. It brought HD graphics and the innovative second screen gamepad idea to Nintendo's table - but how good was it? Since this is a tech talks article, I'm going to start by detailing the hardware before talking about the system's pitfalls. I'm also comparing it to the Wii and Xbox 360 to give an idea of the generational leap.

    [Wii U]
    CPU: 3-core 1.215GHz PowerPC-based CPU
    GPU: Custom AMD "GX2" GPU @ 550MHz - 32MB embedded memory (eDRAM) @ 563.2GB/s
    RAM: 2GB RAM @ 12.8GB/s

    [Xbox 360]
    CPU: 3-core 3.2GHz PowerPC-based CPU
    GPU: Custom ATI (now AMD) "Xenos" GPU @ 500MHz - 10MB embedded memory (eDRAM) @ 256GB/s
    RAM: 512MB RAM @ 22.4GB/s

    [Wii]
    CPU: 1-core 729MHz PowerPC-based CPU
    GPU: Custom ATI "Hollywood" GPU @ 243MHz - 3MB embedded memory (note: 21MB cache available to the GPU outside of embedded memory) @ 2.7GB/s
    RAM: 64MB (88MB total) @ 2.7GB/s

    So it's good to see that the Wii U offers a nice jump over the Wii. But the Xbox 360? It's not much further ahead. The CPU isn't as far behind as it seems, because the Wii U's CPU is a newer design - and though it's unlikely to beat the X360's CPU on its own, the Wii U also has tremendous support for GPGPU (general purpose GPU) tasks to take load off the CPU. So... what's the big deal? Well, I'd like to draw your attention to the GPU bandwidth (the GB/s figure). Xbox 360 games often utilised the 10MB of very fast embedded memory by placing all the pixel information there, because doing so in such fast memory gives you very cheap anti-aliasing and effects. The catch? 1280x720 wouldn't realistically fit in a 10MB buffer. A 720p setup with double-buffered v-sync only requires about 7MB, but double buffering has a fatal drawback: if a game drops even a couple of frames below its target, it locks to the next framerate down - eg. if a game targets 60FPS and on-screen action makes it fall a few frames short, the game locks to 30FPS. This causes jarring slowdown in the middle of gameplay. The cure, triple buffering, needs a third buffer, which pushes 720p to around 10.5MB - just over the limit. So games often targeted lower resolutions - common ones include 1024x600 and 880x720 - and cleaned the resulting image up with anti-aliasing.

    So now think about this: the Wii U has over 3 times this memory, at over twice the bandwidth. A regular 1920x1080 framebuffer (where pixel information is stored) with double-buffered v-sync takes up just under 16MB; so in theory, you could use 2x FSAA - full scene anti-aliasing, where the resolution is multiplied and then sampled back down - on top of a 1080p game on Wii U and still use just under 32MB! Not all of Nintendo's games did this, though, because Nintendo didn't make optimised Wii U tools; they ported over other PowerPC tools without considering this secret weapon. Some developers, such as Shin'en, did use the eDRAM by optimising their tools - and they talked about this themselves when discussing the development of two of their Wii U games, Fast Racing Neo and Nano Assault Neo. Manfred Linzner described the eDRAM as "a simple way to get extra performance without any other optimizations", and it shows that Shin'en knew what they were talking about. They opted for smooth performance by using triple-buffered 720p in Fast Racing Neo, because triple-buffered 1080p would have pushed them over the memory limit, and Fast Racing Neo remains one of the most beautiful Wii U games to date - pushing very high-res textures and techniques not usually seen in Wii U games, such as ambient occlusion and HDR.

    [​IMG]
    So why do so many Wii U games look/run the same as, or even worse than, PS3 and Xbox 360 titles? There are a few answers to this. Firstly, most PS3/360 ports released around the time of Wii U launch and were rushed to the shelves. The developers likely had to make do with early tools and poorly ported libraries. Secondly, while the Wii U has a 32MB pocket of very fast memory, the main memory itself is significantly slower - almost half as fast - as the PS3 and 360. This can leave the fast eDRAM bottlenecked by slower memory everywhere else. Finally, some of the Wii U's bandwidth is used to stream a game to the Wii U gamepad as well, potentially incurring further delays.

    Some final notes about the Wii U hardware:
    - The Wii U includes full, hardware backwards compatibility with Wii. But what people don't realise is that Nintendont, a popular Wii homebrew program for playing GameCube games, actually runs GC games without under-clocking to GameCube speeds or using GameCube hardware - it runs the games at Wii speeds like any other Wii game. So you can play GameCube games on your Wii U with Nintendont. What's more, tools exist to mimic GameCube virtual console titles which auto-boot a GameCube game running under Nintendont. Nintendont even supports the Wii U gamepad as a controller for GameCube games!
    - The Wii U has a main "GX2" GPU with a modern feature set based on AMD Radeon R600/700 series GPUs, but it also has a "GX1" GPU - which is the Wii's Hollywood GPU! When running in backwards compatibility mode, the GX1 does all of the work, but the Wii U is displaying everything at 1080p via HDMI - which the GX1 couldn't do, so the GX2 works alongside it.
    - The Wii U GPU supports modern shaders, 4K textures, HDR, ambient occlusion, physically-based rendering, GPGPU calculations and countless other techniques the Wii didn't support. Compared to the PS4 it's a few GPU generations behind, but it certainly exceeds the Xbox 360 and PS3's feature set, which dates back to before AMD even acquired ATI (the 360's GPU was an ATI design; the PS3's RSX came from Nvidia).
    - The Wii U gamepad has a built-in microphone, camera and gyroscope, and the screen resolution is 854x480. This is a perfect fit for Wii/GameCube games while having a good enough pixel density as a 6.2" display.

    [​IMG]
    So while this is a tech article, I thought I would also mention why the Wii U failed. Contrary to some beliefs, the Wii U didn't fail because of bad hardware or a lack of games. The hardware was good enough for PS3/360 ports as well as some slightly cut-down PS4/XB1 games, on top of Nintendo's own exclusives. And many people forget that the Wii U launched with a respectable collection of third party games; ZombiU, Assassin's Creed, Call of Duty, 007, Need for Speed, Splinter Cell, Deus Ex and Sonic were all big-name franchises which appeared on Wii U at launch or shortly afterwards. The third party games became a problem when third parties realised the Wii U was going to fail, not the other way around. Additionally, Nintendo's own Wii U games are proven smash hits on Switch, and the Wii U had a high software attachment rate (ie. games sold:consoles sold ratio).

    The biggest reason the Wii U failed was marketing. The Wii U, for millions of people, was seen as an accessory to the Wii. The name never made it clear that it was a new console, and many thought it was an expensive Wii add-on - so many, in fact, that I would argue calling it even something like "Wii 2" would have saved the console from commercial failure. And for the people who did realise it was a new console, the marketing and advertisements for Wii U were remarkably cringey and did a poor job of enticing users to buy one - in fact, many Wii U ads have been the subject of memes online. It's easy to see how both of those factors resulted in a very limited potential audience.


    Thanks for reading another one of these long Tech Talks articles! I'm sure by now you're sick of seeing these on the GBAtemp Blogs page, but I've had a lot of spare time and I enjoy writing these up when things are quiet. As usual, give me some feedback in the comments or even a like if you feel like a maniac. It would also be good if you told me what you would be interested in seeing next; a PlayStation 1/2/3/4 tech article? Atari articles? You name it and I'll see what I can do. See you in the comments and the next blog post!
    TheMrIron2 Hey guys, before I begin I would like to say thanks for all of the responses on the last blog post. There were a lot of interested people and it's nice to see enthusiasm for these, as I value comments and feedback over likes - and people were asking more questions last time which was great.

    So I'm doing this post because I have a lot of time and a lot of seemingly useless knowledge and trivia about hardware built up, which people seem to enjoy reading about, and because my Discord server voted to do a Wii tech talk by a landslide... anyway, I hope you enjoy it!

    The Wii is an interesting system. Despite claims that it was "two GameCubes" (more on that later) and couldn't do HD graphics like its competitors, the Wii went on to be the best-selling system of its generation, selling over 100 million units in total, with classics such as Super Mario Galaxy and The Legend of Zelda: Skyward Sword releasing over the system's life. It proved that Nintendo were the best at luring in a wide audience, and - after the success of the DS, which also paired an innovative gimmick with comparatively weak hardware - it proved that hardware doesn't matter as long as you're strong in other aspects.


    Hardware:

    In 2005 Nintendo revealed their "Revolution" console, which went on to become the Wii. The idea was that tech specs didn't matter; Revolution put "revolutionary" gameplay and fresh new experiences ahead of photorealism. This was a massive success, as it appealed to a very diverse range of players. But did Nintendo nail it? The problem was that, while the Wii didn't attempt to deliver photorealism, the hardware sometimes just wasn't enough to pull its weight. A quick refresher on the specs of the Wii versus its competitor from 2005, the Xbox 360:

    [Wii]
    CPU: 729MHz PowerPC-based CPU
    GPU: Custom ATI "Hollywood" GPU, 243MHz - 3MB embedded framebuffer/texture memory, plus 24MB of fast memory on the GPU package
    RAM: 64MB (88MB total)
    (Additional info: 243MHz ARM-based "Starlet" processor (for inputs/outputs and security) inside the GPU)

    [Xbox 360]
    CPU: 3.2GHz 3-core PowerPC-based CPU
    GPU: Custom ATI "Xenos" GPU, 500MHz - 10MB embedded memory (eDRAM); unified RAM may be used for textures and assets
    RAM: 512MB (unified, shared between GPU and main memory)

    Okay, so it's clear by now that the Wii didn't target tech specs. It was a decision made by Nintendo to go for a more power-efficient, cheaper and smaller system as opposed to a powerhouse like the Xbox 360. But did Nintendo compromise too much? In my opinion, they did, and I'll explain why. Here are the GameCube's specs, from 2001:

    [GameCube]
    CPU: 485MHz PowerPC-based CPU
    GPU: Custom ATI "Flipper" GPU, 162MHz - 3MB VRAM, may tap into 24MB main RAM
    RAM: 24MB "main" RAM + 16MB additional RAM for audio and other misc. tasks

    So when people say "the Wii was just two GameCubes duct-taped together", the reality is even scarier - in actuality it's more comparable to an overclocked GameCube, about 1.5x a GameCube on paper. Granted, the hardware had added support in terms of hardware-level instructions, but it is incredible how close the two consoles were. The Wii and GameCube share similar architectures because the Wii was intended to be backwards compatible with the GameCube without the need for emulation, so similar hardware was necessary. However, it became clear not long after the Wii's launch that Nintendo had fallen slightly short.

    [​IMG]
    [​IMG]
    Take Call of Duty 4: Modern Warfare as a prime example: released in 2007 on PS3, Xbox 360 and PC, it was reworked and optimised for Wii in 2009 as Modern Warfare: Reflex. Games ported over from PS3, Xbox 360 and PC had to take big cutbacks. Reflex showed clear intent to recreate the experience of the other consoles, and even added new things to make it a truly Wii experience, but despite the effort it often wasn't enough. Modern Warfare ran at 1024x600 at 60FPS on PS3 and Xbox 360, with MSAA (a form of anti-aliasing, a technique to clean up a 3D image) on top. By comparison, MW Reflex ran at about 640x480 at 30FPS - even in 16:9 widescreen mode; Wii games could not render a full 854x480, so widescreen games on Wii simply had to be squashed into a 640x480 target and then stretched back out to a 16:9 ratio. Despite halving the pixel count and having twice as long to render each frame at 30FPS instead of 60, the Wii version still lacked in many areas. Cutting-edge shading techniques were glaringly absent in comparisons, and the resolution deficit, along with a lack of anti-aliasing, gave the game a rougher look that lacked the sharpness, responsiveness and cleanliness of the other consoles.

    I mentioned shaders, and this is an interesting case: the Wii has a shading pipeline, but it does not support traditional programmable shaders. Rather, the Wii - like the GameCube before it - relies on a custom "TEV" (Texture Environment) unit. This doesn't do shading in the traditional way, like the pixel or vertex shaders on other platforms; the TEV essentially combines and modifies textures directly to give the impression of shading - it's difficult to say exactly how, as even Nintendo's official Wii graphics primer isn't very clear about it. The Wii can use up to 8 hardware lights at once, and the TEV can blend the rasterised colour into a texture based on a defined colour. It's a bit of a strange system, but it worked in games such as Rogue Squadron for GameCube to produce convincing lighting and shader-like effects.

    The Wii supports textures up to 1024x1024, like the Nintendo DS. To get a bit more technical about video memory usage, the Wii has a hard 1MB cap on framebuffer sizes, limiting developers to approximately 640x480, or 640x528 at maximum (eg. for PAL games), similar to the GameCube. The remaining 2MB of VRAM is used as a tight, fast cache, with the embedded GPU memory outside the VRAM used as slightly slower texture memory. The Wii supports fairly standard GPU features, such as anti-aliasing, bilinear texture filtering and mip-maps. However, anti-aliasing even on a simple scene requires compromise; the highest possible resolution in the Embedded Frame Buffer (EFB, the buffer inside the 3MB VRAM) with anti-aliasing enabled is half the vertical resolution, at 640x264. This, along with the high performance penalty, is why many developers didn't use anti-aliasing - hence the unclean look of many Wii games.

    Nintendo were, without much doubt, the best developers for the Wii. Unlike most studios, Nintendo designed their Wii games with one goal: running at a consistent 60FPS (30FPS, in the case of the Zelda games) on fixed Wii hardware. Games such as Mario Kart Wii, Super Mario Galaxy and Wii Sports showed Nintendo's proficiency and creativity with stylised artwork. Mario Galaxy in particular is often seen as a great demo for the Wii, and it's easy to see why; it's a fluid, 60 frames per second platformer with some great artwork and great use of comparatively limited hardware. Despite the Wii's limited shader capabilities, Galaxy's worlds look alive thanks to clever use of bloom and colour grading to create a convincing atmosphere in space... if that makes sense.

    [​IMG]
    Another issue many games had is disc size. PS3 games used Blu-ray discs, which held 25GB (single-layer) or 50GB (dual-layer). The Xbox 360 and Wii both had the same problem - DVD discs, which could only hold 4.7GB (single-layer) or 8.5GB (dual-layer). The Xbox 360 suffered even more, as 1GB of the ~8GB of usable disc space was allocated to Microsoft's anti-piracy checks - meaning that, astonishingly, a Wii or PlayStation 2 game could hold more data than an Xbox 360 game disc. The Wii's total space was also somewhat restricted because Nintendo forced every disc to ship with IOS software - more on that below - even if the game didn't use it, though this shouldn't have made much of a difference. However, the 360 had the advantage of hard drive installs, meaning games could download additional update data. The Wii's support for this was hampered by its mere 512MB of built-in storage; even though it is expandable with an SD card, developers still couldn't ship additional content that wouldn't fit in the Wii's storage or that would eat a large chunk of it. Thanks to compression, many games were still able to get away with it - helped by the fact that the Wii didn't need high-resolution assets like Xbox 360 or PS3 games did - but some games may have had to cut content to fit on a DVD.

    Software:

    So on a software level, what did the Wii have running under the hood? Many of us are familiar with the Wii Menu: an elegant piece of system software where you can play your disc games, installed titles and channels by waggling your Wii Remote in front of the screen. But what happens beneath that menu? Was DVD support for the Wii ever going to happen? Why was the Virtual Console so interesting? How small were WiiWare games? I'll answer all of these questions below.

    [​IMG]

    Beneath the Wii Menu sits IOS, a collection of system software - USB and network drivers, WiiConnect24, security and so on - which forms the backbone of the Wii. IOS runs on the Wii's "Starlet" ARM chip and operates independently of the main PowerPC CPU. The Wii's standby mode demonstrates this; when the Wii is in standby (with a yellow LED on the power button), the main hardware is powered off and only the ARM chip remains active, since it handles WiiConnect24. There are about 50 IOS versions, a few of which are stubs - some slots, such as IOS 249, are used by homebrew programs for backup game loading among other things. Strangely, IOS got a new version after every single change. You can also tell from the earlier IOS versions how little Nintendo initially understood their own ARM hardware; many debugging routines were left in, and parts were compiled on Cygwin (a Unix-like environment running on Windows) while other parts were compiled on a random developer's Linux setup.

    So, as you may know, the Wii didn't support CDs or DVDs - only Wii and GameCube discs, with GameCube compatibility dropped in later models as well. But did you know that there are clues left behind showing that Nintendo were planning to support DVDs at one point? Early IOS versions have full DVD support built in, and the SYSCONF - the configuration file that the Wii reads its settings from - contained a flag that enabled DVD support for a long time. A homebrew program called DVDX enabled DVD playback on certain Wii models, and WiiMC, a Wii media centre program, also accepts DVDs - though not all Wii drives will read them.

    The Wii's Virtual Console is also very interesting. The Virtual Console offered official Nintendo emulation of decades of gaming history - which was intriguing for a few reasons. Firstly, it was interesting to see Nintendo develop emulators themselves - and it was great to see them provide a gateway to their old titles. However, the drawback was that the Wii wasn't powerful enough to emulate some Nintendo 64-era games at full speed, such as Perfect Dark and Donkey Kong Country, so games that could potentially have made the cut were scrapped. It again makes you wonder whether Nintendo could have gone a bit further with their hardware.
    But what's really interesting is that Nintendo, who are profoundly against piracy and downloading ROMs off the internet, apparently sold Virtual Console users a pirated copy of Super Mario Bros. for the NES. If you open a decrypted version of Super Mario Bros. from the Virtual Console in a hex editor, you will find a ".NES" header in the file. That's not enough to prove anything on its own, but it suggests the ROM had existed as a file and been used with an emulator. More tellingly, the version of Super Mario Bros. used for the Virtual Console release was completely identical to a copy circulating online - and since there should be minor variances between separate dumps of the game, this strongly suggests it was downloaded from the internet. Marat Fayzullin, creator of the very early NES emulator iNES, explained why:
    "There are minute differences between ROM dumps," explained Fayzullin. "Depending on the cartridge version and how it has been dumped. If you see that your .NES file DOES NOT match any of the ones found online, it is likely to be their own ROM dump. I have cut the ROM content out of the Wii file you sent me and it indeed matches the .NES file found online."
    Nintendo denied these allegations when asked, but declined to explain the discrepancies - which leaves the pirated-copy explanation as the most plausible one. As Nintendo themselves say about piracy, "It's that simple".

    On a final note about Wii software, WiiWare was Nintendo's selection of bite-sized games, released mostly by smaller publishers. WiiWare games such as Fast Racing League and World of Goo were enjoyable and didn't break the bank like a big-budget title. But did you know how restrictive the platform was on developers? A WiiWare game could be no bigger than 40MB, which is really quite tiny. A WiiWare version of the popular indie title Super Meat Boy was cancelled after the developers refused to compromise on quality to fit the game within the size limit. However, games designed specifically for WiiWare could achieve surprising results; Shin'en, who went on to produce Fast Racing NEO/Fast RMX and Nano Assault Neo on Wii U/Switch, were one such team. An example of their expertise is Jett Rocket, which features lush environments, slick animation and nice artwork, all within WiiWare's size constraints. It's a great effort which arguably rivals some of the bigger-budget Wii titles while being easy on your Wii's storage and your wallet.

    [​IMG]

    Conclusions:

    The Wii was a strange beast which ended up becoming the dark horse of its generation. Despite its relatively weak hardware, which did cause issues, many developers produced unique experiences that players of all kinds enjoyed, and the Wii was a great example of how "more advanced" doesn't necessarily mean "better".


    Thank you for reading this lengthy post! If you've enjoyed this and/or learned something from this post, tell me what you enjoyed or what could have been improved in the comments and leave a like. Don't forget to leave any suggestions for future posts in the comments as well!
    TheMrIron2 Hey guys. Before I begin, I'd like to thank you all for the feedback and positive responses to my previous (and first) Tech Talks blog, posted back at the end of April. I've been busy with class, and I have actually tried to write one of these up twice since - but I was doing it on mobile, and one slip of a finger sent me back a page; blog posts don't have auto-saved drafts. Oh well.

    The jump from 2D to 3D graphics was undoubtedly one of the biggest technological revolutions in gaming history. Entirely new experiences were made possible and new genres spawned from the birth of 3D gaming. On PC, games like DOOM and Wolfenstein 3D attempted pseudo-3D graphics in 2D engines, but it was Quake that brought true 3D graphics to PC gamers in 1996, sending hardware manufacturers scrambling to produce hardware capable of competent 3D rendering. Consoles received true 3D graphics at around the same time: Nintendo delivered Super Mario 64 in 1996, with Turok and GoldenEye 007 following in 1997, while Sony released Crash Bandicoot and Tomb Raider on PS1 to trade blows with Nintendo's N64. However, handhelds were still behind the curve; Atari's 1989 Lynx remained more powerful than any of Nintendo's handhelds for the rest of the 20th century, until the GBA released - and even the Lynx couldn't do much better than pseudo-3D.

    [​IMG]
    [Above: Blue Lightning on Atari Lynx, one of the system's most graphically convincing games]

    The GBA was the first handheld with the ability to really pull off 3D graphics. Attempts had been made before, such as Faceball 2000 for the Game Boy (thanks JellyPerson for an interesting example!), but nothing with the sophistication of GBA 3D. Of course, the GBA wasn't perfect and was still primarily a 2D system, but games like Asterix and Monkey Ball had true 3D graphics.

    [​IMG]

    It wasn't long until systems genuinely designed for 3D were announced, though; Sony announced the PSP around the time of E3 2003, and Nintendo officially announced the DS in 2004. Both systems released in 2004 and changed the handheld landscape. The DS offered a sizable and welcome hardware upgrade over the GBA, making it capable of 3D of a respectable standard, comparable to N64 games. The PSP was a more serious system, intended to deliver a directly comparable, PS2-quality experience on the go. So with that said, let's see how the two systems - as well as the GBA - sized up against each other.

    [GBA]
    CPU: 16.78MHz ARM7-based CPU
    GPU: Custom 2D graphics core (not a discrete GPU - integrated into main processor core), 96KB VRAM
    RAM: 32KB RAM, plus an additional 256KB of DRAM outside the CPU
    Resolution: 240x160

    [DS]
    CPU: 67MHz ARM9-based main CPU; 33MHz ARM7 coprocessor for background processes and GBA support (note: the ARM7 is not directly available to developers for running their own code - it is accessed through libraries for sound etc.)
    GPU: Custom 2D graphics core and 3D core (managed by ARM9 CPU), 656KB VRAM (total)
    RAM: 4MB
    Resolution: 256x192 (both screens)

    [PSP]
    CPU: 222-333MHz MIPS-based main CPU; identical secondary CPU (for decoding etc, with programmable sound capabilities); 1x Vector Unit (in simple terms, a coprocessor for number-heavy tasks) @ 3.2GFLOPS
    GPU: 111-166MHz graphics core, 2MB VRAM
    RAM: 32MB (usable for games; PSP-2000 and later had an additional 32MB for system tasks only)
    Resolution: 480x272

    So as you can see, the PSP wiped the floor in terms of specifications; it was clearly a closer match to sixth-generation consoles, whereas the DS was closer to PS1/N64 tech. But what did this result in?
    If you compare games on both systems, it's clear; the PSP received direct ports from the PS2, whereas the DS usually had to have its own specialised version of a game, cut down to suit the hardware. For the sake of comparison, I have included two games often heralded as technical showcases for their respective systems: Metroid Prime Hunters on DS and God of War: Ghost of Sparta on PSP. While this is a bit of an apples-to-oranges comparison, it does give an idea of the sort of visuals and tech both systems were pushing.

    [​IMG]

    [​IMG]

    (Note: I couldn't find an image that did Hunters justice at original 256x192 resolution so I opted for a slightly higher resolution image)
    Metroid Prime Hunters was one of the closest games to GameCube standard on the DS. It had nice dynamic lighting and some great textures and detail that really made the game shine. It had online multiplayer and is considered one of the best action-adventure games on the DS. The game's biggest drawback compared to the GameCube Metroid games was perhaps the less-than-incredible control scheme, but it is still regarded as one of the best-looking games on the system.
    God of War: Ghost of Sparta is one of the relatively few PSP games designed around the full 333MHz mode. It's a beautiful game with a lot of detail that comes very close to matching the PS2 God of War games, running at a mostly strong 60FPS - though it is prone to framerate drops.

    Despite the hardware differences, sales tell a different story. The PSP sold 80 million units, while the DS sold over 120 million - about 155 million if you include DSi models as well. While the PSP was no failure, it couldn't keep up with Nintendo's sales. The DS was simply friendlier to casual players, and lacked the PSP's issues such as proprietary memory cards and the slow, grinding UMD discs.

    So, the verdict: which handhelds did it right? Between the PSP and DS, it's personal preference; the DS offers unique dual-screen experiences that can't be replicated on a PSP, while the PSP is a much more technologically advanced system. The GBA was really the first true landmark system for handheld 3D graphics, and while 3D GBA games were few and far between, they definitely existed and should be credited as some of the first ever 3D games on a portable console.

    Thank you for reading this lengthy post, and I apologise for the gap between my first article and this one. If you liked this, have any feedback/constructive criticism or want to see more, please let me know in the comment section - and if you thought this was a good article, don't be afraid to give it a like as well.

    (Update: I'd also be interested in hearing what you want to see next.)