Post by mjuarez on Sept 23, 2012 3:53:37 GMT -8
So, I've been thinking about this for a while now. I thought I'd post this and ask for some feedback.
Back when Apple decided to go with Intel chips inside the Mac, it seemed like a huge and very bold bet. Apparently it had been in the works for years inside Apple, but it was only shown to the world at WWDC 2005. Even though there were a few issues, and the usual complaints and questioning from pundits in the media, the changeover to Intel turned out to be relatively painless, and it enabled much faster growth for the Mac in the years that followed.
Think about it. Apple transitioned an entire operating system (and its corresponding application and user base) from a Motorola/IBM architecture (PowerPC) to a binary-incompatible one (Intel's x86). Within about a year, the transition was basically done. So how the hell did they manage to do this?
OS X and iOS are mostly C and Objective-C code, and the compiler toolchain sits between that code and the silicon. These days that toolchain is LLVM, an open-source project. LLVM gives you a level of abstraction where you can basically "direct" the compiler to generate code for one CPU architecture or another, without changing your source code. Apple has been moving all of its products onto LLVM for years now (it's an integral part of the Xcode IDE), and it employs the project's founder and many of its main contributors.
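To make that concrete, here's a minimal sketch of what "directing" the compiler looks like from the command line. The file and output names are made up, and the exact flags depend on your Xcode/SDK version, so treat it as illustrative:

/* hello.c -- one source file, any number of targets */
#include <stdio.h>

int main(void) {
    printf("Hello from whatever CPU this was compiled for\n");
    return 0;
}

The only thing that changes between builds is the -arch flag you hand to clang:

clang -arch i386   hello.c -o hello_i386     (32-bit Intel)
clang -arch x86_64 hello.c -o hello_x86_64   (64-bit Intel)
clang -arch armv7  hello.c -o hello_armv7    (ARM; this one also needs -isysroot pointing at the iOS SDK)

Same source, three binaries for three instruction sets, zero code changes.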
One of the reasons the transition to Intel was so seamless is that OS X had already been compiled for x86 internally for years, in parallel with the PowerPC builds (with GCC back then; the move to LLVM came later), so by the time they announced the decision, they simply flipped the switch. From the same exact codebase, the compiler generated code that Intel chips could run instead of PowerPC ones. For most code, no changes were needed, just a recompile, and fat "universal" binaries let a single app carry both versions.
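Those universal binaries were the same trick taken one step further: pass multiple -arch flags and the toolchain glues the results into one fat file. A sketch using today's architectures (during the actual transition the equivalent was gcc -arch ppc -arch i386):

clang -arch i386 -arch x86_64 hello.c -o hello_universal
lipo -info hello_universal
(prints: Architectures in the fat file: hello_universal are: i386 x86_64)

The OS just picks the right slice at launch time, which is why one app bundle could run on both PowerPC and Intel Macs.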
By the way, this is exactly how Apple's iOS works today. It has been stripped of things it doesn't need, but it's basically the same OS X codebase, targeted at the ARM architecture inside Apple's Ax chips. When you're developing apps for iOS, Xcode is simply telling LLVM to target ARM; if you build an app for OS X, it switches back to Intel's x86.
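You can see the "same codebase, different target" idea in any source file that builds for both platforms. A minimal sketch: TARGET_OS_IPHONE comes from Apple's real TargetConditionals.h header, and the __arm__/__x86_64__/__i386__ macros are predefined by the compiler for whatever architecture it's targeting:

/* which_target.c -- identical source for an OS X or iOS build */
#include <stdio.h>
#include <TargetConditionals.h>   /* defines TARGET_OS_IPHONE, TARGET_OS_MAC */

int main(void) {
#if TARGET_OS_IPHONE
    printf("Built for iOS");
#else
    printf("Built for OS X");
#endif

#if defined(__arm__)
    printf(" on ARM\n");                 /* what Xcode targets for an Ax chip */
#elif defined(__x86_64__) || defined(__i386__)
    printf(" on Intel x86\n");           /* what Xcode targets for a Mac */
#else
    printf(" on some other CPU\n");
#endif
    return 0;
}

Xcode doesn't touch a line of this when you switch the build destination from a Mac to an iPhone; it just hands the compiler a different -arch.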
So, having said all that, I'm pretty sure Apple is thinking even bigger and bolder for the future. With the latest A6 chip, Apple has shown that it has the in-house talent to design bleeding-edge custom CPU cores without relying on third parties (the A6's GPU is still a licensed PowerVR design, but the CPU cores are Apple's own). So what's stopping them from creating a desktop-class chip of their own?
The benefits going forward would be manifold:
- Complete customization. Right now, INTC designs its chips mostly around what MSFT/Windows needs to run. Right or wrong, MSFT still controls 90%+ of the PC market, so that's what INTC targets its chips for. With complete control of the design, Apple could strip out features OS X doesn't use and add pipelines, bandwidth, etc. where it knows OS X would benefit, or target efficiency instead of raw performance. We've already seen this with the A6 inside the iPhone 5: a custom design, built on an ARM architecture license, that lets Apple optimize specifically for its needs.
- Supplier leverage. Intel is probably the only supplier that knows Apple can't go anywhere else to buy its wares. Almost everything else in a Mac is fungible to a degree, from RAM to storage to baseband/radio chips. That gives Intel leverage, and that's not good for Apple's margins.
- Potentially major cost reduction. Because Intel knows Apple can't go anywhere else, it can charge premium prices for its chips; an Intel chip can easily run $150-$200 in bulk. When you're talking about bringing Macs in under $1,000, that becomes a major problem. Manufacturing your own chip at a marginal cost of around $40 instead of paying $200 would make a big difference.
- Most important of all for Apple at this point: complete control. Apple has been burned badly by suppliers before. Motorola and IBM weren't willing to keep investing in PowerPC toward the end of that era, and Apple suffered for it. Intel's inability to deliver the latest Xeon parts has reportedly played a role in at least one Mac Pro delay. If Apple controls the CPU design, it can outsource manufacturing to multiple semiconductor fabs around the world, lowering the risk of supply-chain disruption even in a disaster (e.g., last year's tsunami in Japan).
I haven't even launched VMware Fusion in months now, which means I definitely don't need x86 compatibility in my Mac anymore. Most people I know are in the same boat: once they go Mac, there's very little point in going back to Windows. This isn't true for all users, mind you, but 95%+ of the market wouldn't even know or care if the chips inside Macs changed, as long as all their OS X apps worked the same.
Maybe this won't happen until 2014 or even later, but I'm pretty sure it will happen. Just like ditching PowerPC back in 2005, it would be a bold move, but Apple has what it takes to pull it off.
Comments/feedback?