It might not be so much that porting is difficult as that it is not worth the effort for the sales the games will get (this need not be about profit as such; the dev time could simply be spent elsewhere for a bigger return).
To answer the question though, PowerPC is about as broad a label as saying Intel or even ARM -- chips might share some of the same companies and design philosophies but their actual implementations can be radically different. Indeed the 360 is a fairly standard PowerPC but the PS3 is a Cell, which has these things called SPEs (Synergistic Processing Elements if memory serves) that change the game quite a bit.
Beyond that, the architecture of the CPU tends not to matter half as much as it did in years past, where you might be squeezing every drop out of it and that meant both knowing the processor and coding to it (something most coders are no longer taught, let alone do, if they are even allowed to). Broader architecture choices can still matter but that can be covered later.
I am not sure quite how to tackle this more fully without going into a general assembly programming, modern graphics programming and computer architecture discussion.
I can at least go in for a partial glossary sort of thing, consisting of terms I have seen floating around the last few weeks/months.
Floating point operations
Computers tend to work best if you use whole numbers; the dominant way of doing decimal point numbers is called floating point (the DS uses a slightly different method called fixed point, others use BCD, others use log tables, and there is also the "work around it and do not put it into the calculator until you have to" method). Depending upon the maths you are doing its accuracy is actually pretty poor, so you get single precision, double precision, quad precision and further varying grades which allow for bigger numbers to be represented more accurately. Higher precisions may or may not be supported, or may not be as fast, but they are needed for better graphics, movement, physics.....
The speed is measured in FLOPS, aka floating point operations per second, and that is why people like that number in specs lists.
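To make the precision point concrete, here is a small sketch (plain Python, whose floats are double precision) of the accuracy you give up by squeezing a value into single precision:

```python
import struct

# Python floats are double precision (64 bits). Round-tripping a value
# through a 32-bit single precision float shows the digits that get lost.
value = 1.0 / 3.0  # no exact binary representation at any precision
single = struct.unpack("f", struct.pack("f", value))[0]

print(f"double precision: {value:.17f}")   # 0.33333333333333331
print(f"single precision: {single:.17f}")
error = abs(value - single)
print("error from the single precision round trip:", error)
```

The error looks tiny, but run millions of physics steps per second and those lost digits are the difference between stable and wobbly simulation.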
SIMD -- mainly here as the Wii U SIMD operations were trashed by Marcan and co. I do not know if you are familiar with arrays but they are a core component of computing these days (
http://mathworld.wolfram.com/Array.html has the formal treatment, though nicer-to-read stuff exists elsewhere). Without getting into the boring stuff they look as follows
(1 1 7)
(87 88 44)
(14 487 98)
If you wanted to multiply that by 4 you multiply each of those numbers; a traditional processor might have had to do this one at a time and that would take ages. SIMD allows you to do it on the lot at once. This is why video started being a thing around the introduction of MMX way back when. At times I would probably place SIMD ahead of floating point in the list of things a games/multimedia machine has to be able to do. The above is somewhat simplified, and for an array like that it is not so bad, but start mashing multiple arrays together and doing proper array maths (especially if the array holds floating point numbers) and it gets ugly. SIMD = single instruction, multiple data.
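Plain Python has no way to issue real SIMD instructions, so this is just the scalar version of the job; the point is that the loop below issues nine separate multiplies, where a SIMD unit (MMX, SSE, AltiVec, NEON...) would apply the one multiply instruction to a whole row or more of the data at once:

```python
# The 3x3 array from above, scaled by 4 one element at a time.
matrix = [
    [1, 1, 7],
    [87, 88, 44],
    [14, 487, 98],
]

# Nine multiplies, done one after the other -- the non-SIMD way.
scaled = [[element * 4 for element in row] for row in matrix]
print(scaled)  # [[4, 4, 28], [348, 352, 176], [56, 1948, 392]]
```

(Libraries like NumPy get their speed partly by dispatching exactly this kind of whole-array operation to the SIMD hardware under the hood.)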
Multicore -- If I have one processor screaming along at 3 GHz I can double my speed if I have two... actually no, but it can get pretty close, though the gains rapidly drop off when adding more and more general use processors, to the point where it eventually becomes unproductive (see Amdahl's law). This leads to things like the SPEs mentioned for Cell and the GPGPU stuff I will cover next. Also coding for multicore, especially in an arbitrary/unpredictable environment like games, is a right pain: you do not easily know what is going on in the other core, you might have to wait or face not having a number to work with (see race conditions), and generally you have to wait for things to happen before you can calculate with them. The Wii U multicore stuff is considered pretty poor by all accounts where the others do better; it is not fatal but not ideal.
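The drop-off has a tidy formula, Amdahl's law: if only a fraction p of the work can be spread across cores, n cores give you at best 1 / ((1 - p) + p/n). A quick sketch (the 0.9 figure is an arbitrary example, not a measurement of any real game):

```python
def amdahl_speedup(p, n):
    """Best-case speedup on n cores when fraction p of the work parallelises."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelisable, piling on cores flattens out fast.
for cores in (1, 2, 4, 8, 64):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
# 64 cores only manage about an 8.8x speedup here, nowhere near 64x.
```

The serial 10% is the bit you cannot escape, which is why adding cores forever is unproductive.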
GPGPU -- 3D graphics like floating point numbers, no way around it really. As general purpose CPUs were not getting good enough at a fast enough pace we got graphics cards (GPUs) that did it for us. Several years later people realised that graphics cards were powerful little things in their own right and it might be nice to code for them. Graphics card makers opened up accordingly (it was not the first but you might have heard of CUDA) and even added a few options for things graphics cards tended not to have to do (some general purpose instructions, hence GP-GPU). Those added options were nice, as there is a serious performance hit associated with piping data between general memory and the CPU, between general memory and the graphics card, and between the CPU and the graphics card. However, much like normal multicore (graphics cards can have hundreds of cores), it is a right pain to do well, which is why few ever really have (however many years on this is and I still do not have a graphics card H264 encoder I can trust to make quality video). The related performance hit between the CPU and memory is known as a cache miss (too much data to hold in the CPU, so it has to go looking in memory) and is why some are talking about the cache on various systems being larger.
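The piping-data cost is easy to model in a toy way. Every number below is invented purely for illustration (no real bus, CPU or GPU is being quoted); the point is just that the fixed cost of shipping data to the card and back means small jobs are not worth offloading:

```python
def offload_pays_off(n, cpu_ns=10.0, gpu_ns=0.5, bus_ns=4.0, setup_ns=50000.0):
    """Toy model with made-up costs: is GPU compute plus a round trip
    over the bus cheaper than just doing the n operations on the CPU?"""
    cpu_time = n * cpu_ns
    gpu_time = setup_ns + n * (gpu_ns + 2.0 * bus_ns)  # data goes over and back
    return gpu_time < cpu_time

print(offload_pays_off(1000))       # False: the fixed costs swamp a small job
print(offload_pays_off(1_000_000))  # True: enough work to amortise the trip
```

This is also why the GP instructions mentioned above help: anything you can finish on the card saves a trip across the bus.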
The SPE stuff is kind of odd; each one is like a cut-down processor inside the CPU itself, which dodges issues with having to fire data around and many of the issues with having to manage large numbers of processors. It was pretty radical at the time and did offer a massive performance boost if you could use it; again, most programmers are not too capable of this and Sony fluffed the SDK, so not many could truly take advantage of it.
Graphics hardware can also be quite different in what input it expects, what it will do and how it works (shaders are things in the graphics processor, but shader languages are programming languages almost unto themselves).
Out of order execution -- I would largely accuse it of being a buzzword but I will cover it anyway. Processors basically do one instruction after the other until there are no more numbers that need crunching. Though as mentioned above you do have to wait for things to happen, and after the stalled instruction there might be some simple numbers that need crunching; here the processor will skip ahead to work that does not depend on whatever it is waiting on, then go back to the thing it sidelined. Related to this are pipelining and branch prediction. This is also a reason why few people do assembly level/architecture influenced coding any more, as this stuff is really hard to do in your head but the compiler can do OK at it.
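A sketch of the idea, using a deliberately slow function to stand in for an instruction the core is stuck waiting on (a memory load, say). Python itself runs these lines strictly in order; the comments mark what an out of order core would shuffle:

```python
import time

def slow_load():
    """Stand-in for a memory access the core has to wait on."""
    time.sleep(0.01)
    return 42

a = slow_load()  # an in-order core stalls here until the value arrives
b = a + 1        # genuinely has to wait: it depends on a
c = 2 * 3        # depends on nothing above -- an out of order core would
                 # have computed this during the stall, then gone back to b
print(b, c)      # 43 6
```

Spotting which instructions are independent like this, across a whole program, is exactly the sort of bookkeeping that is miserable by hand but fine for the hardware and compiler.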
For all that I said about architecture not mattering and coders not coding to the hardware and such, it kind of still matters: if your processor does not handle certain types of operations so well and your game engine uses a lot of them (say you use a lot of double precision floats but the performance with them is not so hot...), you get to tweak a lot of things. If systems have different types and philosophies when it comes to memory (most consoles do not have the most, even considering they do not have to run a whole OS underneath it all) in terms of splits between general memory, graphics memory and whatever else, and you designed your game to work better on systems with general memory, then even if you are on the same type of processor you are in trouble. Given general computers now far outperform consoles and you can just say "get a better computer" to any customer, it becomes far easier to go to x86.