Why PowerPC?

This section is for people new to the PowerPC architecture (also called Power Architecture).
If you are an expert already, you can enrich this page. Please send us any information you want to be published here. Thank you.

 

The PowerPC architecture is newer in design than the other successful CPU architectures:

X86 – 1978

MIPS – 1981

ARM – 1983

PowerPC – 1991

From the beginning, PowerPC was designed with more features than other CPUs.

The Power instruction set architecture, known as the Power ISA, is in continuous evolution.

Power Architecture (PowerPC) scales from embedded uses to large server clusters.

Specs in short

  • 64-bit architecture with a proper 32-bit subset
  • Wide vector instructions and a large register file allow efficient data movement without resorting to off-chip memory
  • Superscalar RISC design with multiple execution units: branch, fixed-point (integer), and floating-point
  • AltiVec SIMD vector processing
  • Power ISA 2.04/2.05/2.06 add support for multicore/multithreading, virtualization, a hypervisor, and power management

Market Diversity

  • Automotive – from Powertrain, Body, and Chassis, to Safety and Infotainment.
  • Computing – from volume servers to the fastest and most resilient enterprise servers
  • Consumer – core technology for innovative game consoles (Xbox 360, Wii, PS3)
  • High Performance Computing – Sequoia, the IBM BlueGene/Q system
  • Aerospace
  • Wired Communications
  • Wireless Communications

AltiVec SIMD Accelerator

  • AltiVec technology is a vector, or Single Instruction Multiple Data (SIMD), architecture that processes floating-point and integer data in parallel.
  • Developed from 1996 to 1998; part of the standard since Power ISA 2.03
  • Consists of thirty-two 128-bit architectural registers, plus 16 additional vector rename registers
  • The e6500 core includes an AltiVec unit delivering 16 GFLOPS

Specs in Depth (more info)

  • Fixed-width 32-bit instructions, which simplify decoding
  • Load-store model: all computation is done in registers; only load and store instructions access memory
  • Large number of registers (32 general-purpose registers and 32 floating-point registers)
  • Atomic (load-and-reserve / store-conditional) load-store instructions for use in a multicore context
  • Big-endian byte order by default, with the ability to operate in little-endian mode
  • 64-bit architecture, with the behaviour of instructions fully specified for this mode
  • No specific role for general-purpose registers (using r1 as the stack pointer is an ABI choice, not an architectural one)
  • The MMU model is not defined by the ISA; it is implementation-specific, with two broad models aimed at embedded devices and servers

 

Why Is PowerPC Adopted in the Consumer Field Only for Game Consoles?

For PC operating systems with existing applications (many of them proprietary), compatibility must be maintained in subsequent generations of CPUs.

When the first PowerPC was built (1993), most software was proprietary and most applications were written for x86 or Motorola 68k CPUs.

Thanks to Free Software, it is now possible to run the same programs and OS recompiled for PowerPC, so we are no longer forced to use an old CPU architecture.

Game consoles have a minimal operating system with a few small embedded applications.

Games are typically written from scratch or are developed on cross-architecture engines. This means that CPU changes affect them less.

[to update]

 

35 thoughts on “Why PowerPC?”

  1. A bigger question with PPC: does the processor still stand up to the Intel models and clones of today?

    I will admit that most anything on a Windows machine (side by side) will fail compared to its PPC counterpart, but the biggest issue is the drop of support for Tiger from developers and programmers.

    So far my PPC machine (at least when I had one) was able to jump through hoops compared to any Intel; even the Intel all-in-one machines are terrible compared to their PPC counterparts.

  2. Look at the recently released AmigaOne X5000… It’s powered by a P5020 or P5040 with e5500 cores… It’s a high-end computer targeted at games development and multimedia… It is indeed not the fastest machine available in the computer world, but it does its job quite well… Clean design and quality components make it a good choice, not only in the Amiga world. :-)

  3. I forgot to mention that, after the release of the forthcoming AmigaOne 1222 (Tabor), running a dual-core P1022 with e500m cores, the AmigaOne TX is coming,
    featuring the T4xxx range of CPUs with e6500 cores… :-)

  4. I’m surprised a project for a more open laptop would go for PowerPC and not consider OpenSPARC or RISC-V. Both OpenSPARC and RISC-V are open source and royalty-free, but I don’t believe this is completely the case with PowerPC. With the full designs for OpenSPARC and RISC-V, you can prototype using an FPGA and move to an ASIC when complete. Also, it seems like you’re implying that the Debian port for PPC is incompatible because of certain variations?

    Don’t get me wrong, this is an awesome project and I hope you succeed but it seems like you’re making things harder for yourself by choosing a less open and less produced CPU.

    If I’m wrong, I’d love to hear a rebuttal.

    • Would one also need to use an open hardware design FPGA with all open-source toolset to satisfy your definition of open hardware?

      • That would be ideal, yes. Complete user-accessibility (let alone producer) to every design detail from highest level software interface to the traces in the silicon and why particular fabbing processes are better than others, etc. End-to-end. Including the wafer lithograph projectors.

        If USGOV was actually serious about national and consumer security, this would be how the rudder would be set on digital technology. At the very minimum, not one product line should be sold where a computer exists inside it, which reads code from write-able/”update-able” memory, but either directly blocks writing that memory (beyond needing something simple, like JTAG) or blocks loading “unsigned” code from that memory AND fails to provide any way for the user to purge OEM keys + add his own keys (modern Intel processors). Devices which do not conform to this should have a 100%+ sales tax levied on the end-product, AND be required to have something similar to EAL certification done by the final step pre-retail OEM (to prevent circumventing by component compartmentalization). Effectively open Android, iPhone, Windows 10 on ARM laptops, cellphone basebands, SmartTV operating systems, etc, or don’t sell it in America.

        10 years of corporate revolts to this creating a product vacuum, and everything should balance out where OPEN-AMERICAN could be the selling point which both secures the US’s economic and technological future worldwide dominance, and it will starve the proprietary beast for any products which will be sold to the average consumer.

  5. Guilherme G. Piccoli

    Perhaps one of the most interesting properties of POWER nowadays (IMHO) was not mentioned here: the open-source firmware stack, allowing a developer to have entire control (and customization capabilities) of the machine. The full FW stack is available on the OpenPOWER GitHub, and was recently discussed at the last FOSDEM (https://lwn.net/Articles/715817 – “The POWER of open”).

    At least for me, this is one of the most relevant features that makes me want a PowerPC notebook.

  6. Should Amiga and Amiga-like systems be able to run on the proposed design? I’m thinking classic Amiga but also OS4 & MorphOS.

  7. Have you considered the Intel Itanium (IA-64)? This looks to be a decent offering from Intel to move away from x86, with a modern architecture derived from HP PA-RISC, and it supports big endian. You can have an Intel computer that isn’t a PC. LOL. 😀

    On a more serious note, how easy will it be to install another OS aside from Linux with a dual boot menu? Or indeed multiple Linux installs? Will the firmware provide this functionality so an OS can simply install files or even a bootloader?

    Apart from that, despite running on PowerPC, will you be forced to “sell out” to commercial pressure and run it in little endian PC mode? When I saw how Debian and IBM did this I felt the Power architecture lost something and there was less point to the existence of a PowerPC big endian CPU. It also looked like the code was the typical non-portable code that gets written today because they don’t like WYGIWYG for some reason and there is too much effort in “fixing” it. I like the project but don’t like ppc64intel. 😉

    • Most PowerPC boards, including this (still vaporware) one, run OpenFirmware. Simply “installing files” is problematic since file systems differ between OSes. I’m pretty sure OpenFirmware can boot from any partition and supports GPT (for lots of partitions). Read up on OpenFirmware. It’s been over a decade since I’ve used it (on Motorola and IBM PowerPC servers!).

      The PPC is both LE and BE. Applications choose which is in effect at any time, and can switch back and forth freely. The firmware is irrelevant. Typically, the OS compilers and libraries will default to an ABI using one or the other consistently. For instance, Fedora offers both ppc64 and ppc64le (it’s just a matter of recompiling everything). So no worries – only you can choose an evil LE OS!

        • Yes I’ve heard of this. And you can’t trust GCC to turn a ((word >> 8) | (word << 8)) into a lhbrx. I suppose to be fair and avoid calling it lazy it would need a register-to-register swap which AFAIK PPC doesn't have. But it should be done upstream in word read. I think compilers need a specific __attribute_little_endian__ here to mark data as such. Then again the PC guys probably wouldn't use it and we would be no better off and still have all this LE only code. 🙂

          • The endianness is completely invisible without taking an address. This is why Java code doesn’t care about endianness. The ((word >> 8) | (word << 8)) works exactly correctly regardless of the endianness of the memory *unless* you have read it in from some external source – in which case you took the address!

            Yes, you might be able to construct some tricky memory aliasing that fools the compiler – but that causes bugs even when endianness is consistent. Even consistently LE.

  8. I had replied earlier but it didn’t post, which is disappointing as it was from PPC (WebKit), so hopefully PPC Linux works better. Yes, what I was discussing was a little-endian structure read in from disk. Aside from compiler built-in swaps, which should provide the most efficient byte swapping, PPC can read a pointer “backwards”, so being able to instruct that implicitly would be good. It gets messy when it is tested separately and inefficient conversion routines are used. I’m not aware of a way of coding so as to generically read a little-endian pointer transparently. I’ve thought about doing it with unions, or attaching function pointers to a struct representing one data word, which could work well, but then all reads and writes would need to go through this method.

    The other problem I see in code is where the coder assumes the CPU is little endian by reversing bytes in long words. I see this as wrong for a few reasons. By assuming the data needs to be reversed, the code has gone from high level to a lower level. It becomes unreadable. It isn’t portable. And it is unnecessary. A common example is a WAV filter. Now, a WAV or RIFF header is made of chars, so a cheat is to read a long word – which is another thing wrong, really, and it only works correctly on big endian, where string data is in the correct order when read as a whole word. All this can be avoided by invoking a macro that takes all four chars in the header and turns them into a long word, such as a MAKEID macro. This is readable from the onset, keeps the code cleaner, and, most of all, it is portable. The coder doesn’t need to think about it; the compiler does the low-level work. But they don’t do that.

    Finally, the other issue is with graphics drivers that are coded to use ARGB format but break because the newer chipsets only accept BGRA. This is causing trouble for 16-bit pixels as well, since the bytes are reversed. This is obviously done because of the popular endianness, as logically it doesn’t make sense to split up 5-bit or 6-bit values so they are spread unevenly and broken across two bytes. Apart from this there are web browsers, where things like data tables in a JS interpreter only work on little endian because they somehow coded it that way. So, running as little endian works around these problems. But the code remains broken. 🙂

  9. If this project will truly lack an Intel ME equivalent that is going to be a very BIG bonus and should be marketed accordingly.
