Talk:Apple's transition to Intel processors
|This is the talk page for discussing improvements to the Apple's transition to Intel processors article.|
|WikiProject Apple Inc.||(Rated C-class, Top-importance)|
|WikiProject Intel||(Rated B-class, High-importance)|
- 1 Rename proposal
- 2 POV
- 3 Rewrite needed
- 4 Emulation/Virtualisation
- 5 Comparing Mac / Windows
- 6 Viruses & Virtual Machines
- 7 ABI?
- 8 Boot sector viruses
- 9 Extensible Firmware Interface
- 10 New article
- 11 The commercials?
- 12 Advanced Computation Group
- 13 Video of the announcement
- 14 Registers in Intel Core?
- 15 A simple question:
- 16 Merom in July
- 17 Not sure about this statement...
- 18 fair use / free image
- 19 Timeline Update
- 20 Open Firmware vs. EFI
- 21 Bias
- 22 Rewrite needed
- 23 so why exactly did they switch?
- 24 Wrong statement
- 25 Requested move
- 26 AMD's lack of low power CPUs?
Rename proposal

Shall we call this article "The PPC-x86 transition of Apple Macintosh"? Apple is more than Macintosh. Intel is more than x86. And how about "migration"? -- Toytoy 07:22, Jun 14, 2005 (UTC)
- "Macintosh x86 migration"? --Ihope127 22:44, 3 September 2005 (UTC)
- I support renaming. It's the Macintosh, not Apple, that is moving to Intel. "Macintosh Intel migration". jamiemcc 19:23, 11 March 2006 (UTC)
POV

I believe there's an insanely huge POV or RDF hidden inside this article:
- Motorola and IBM failed to deliver a 3 GHz chip, so Apple had to jump ship.
This sounds like nonsense and Apple-centric spin to me. If I am right, the Macintosh marketshare has been falling for the past few years. Anyway, Apple's marketshare has always been miserable. If you cannot let IBM or Motorola earn money, you don't get your chips. No one drinks the Kool-Aid this time. This is exactly Apple's situation today. The money flow that nourished newer PPCs has drained. That's why Jobs gets nothing. 68k failed. Now PPC, as a personal computer CPU, also fails.
Apple's marketshare has always been too small to support its hardware advancements. SCSI failed. ADB failed. NuBus failed. ADC failed. LocalTalk failed. The list goes on and on. There exists an established pattern of hardware standard failures. I think IBM and Motorola are Jobs' scapegoats.
Is there a fundamental problem with the PPC design? If PPC makes money, why don't IBM and Motorola spend more money on R&D? Fact: Apple fails to keep them well fed. -- Toytoy 06:11, Jun 14, 2005 (UTC)
- I wouldn't say any of those "failed" except maybe in the context of the greater PC world. Parallel SCSI in particular became an important standard because the Mac embraced it early on. The others got replaced mainly because something better came along; ADB begat USB, NuBus was replaced by PCI, LocalTalk was replaced by Ethernet. ADC is really the only non-starter on the list. The same thing applies to PowerPC, in this case; it's by no means an abject failure, but it's not the best tool for the job at the moment, and I think Apple is smart for realising that. -lee 05:52, 22 Jun 2005 (UTC)
- That's completely false. Apple's marketshare has been rising for the last several years. Their revenues, unit sales, and profits are the highest they've ever been, and this isn't due exclusively or even mostly to the iPod and iTunes: those are simply publicity grabbers.
PowerPC most certainly did not "fail". Apple, IBM and Motorola created it together, and it's an extremely good performer in many categories. It has nothing to do with Apple keeping IBM and Freescale (formerly Motorola) "well fed". Apple buys millions of dollars worth of CPUs from both companies. However, IBM's (and Freescale's) focus is on an even bigger market for PowerPC: embedded communications, networking, vehicle controls, speciality devices, and so on, and now, all three of the major next generation gaming consoles. PowerPC powered all of Apple's products for over a decade, and continues to be heavily used by IBM in high-end servers and workstations. So while yes, I guess you could semantically argue that as a *desktop* processor, PowerPC "failed", I'd beg to differ. It's just time to move on. Did 80x86 "fail" because it was time to move to Pentium?
Further, yes, in the early days, the Apple hardware platform was very proprietary. However, I take great issue with the basis of some of your claims. Things like ADB and LocalTalk were around before there were any comparable standards to even do those jobs! The rest of the industry may not have picked them up (in part because of Apple's early closed systems), but things like LocalTalk and ADB, and even NuBus, were light years ahead of other competitors. At the time, SCSI was picked because it was the better technology: IDE/ATA was not a clear winner at the time, and SCSI was clearly better. Market dynamics and economics ended up meaning that when the PC industry at large picked IDE/ATA, it ended up being the winner because the market forced prices down, and PCs were, and continue to be, all about price and being a commodity. Even today, SCSI didn't "lose": it's still used in high performance applications where speed and other performance factors are key. Only today is SCSI being outmoded by other technologies. But I will concede that it lost *on the desktop*. But that's not the fault of Apple picking the arguably superior technologies, and where they didn't exist, creating their own. Moreover, Apple's inclusion of USB in the iMac, and the deletion of the floppy and all legacy ports, is viewed as one of the largest catalysts for USB in the entire industry. The PC industry never was so bold. ADC was a damned good idea: dual channel DVI and analog (you could use an ADC connector with DVI or VGA displays), integrated with power and USB and FireWire in one cable. It was completely open. But it never took off. Now, Apple has standardized on DVI.
Today's Macs are a virtual who's who of international and open standards in hardware and software: Ethernet (including the first mass market machines to ship with GigE), PCI, PCI-X, DVI, HD-15 VGA, USB, FireWire (IEEE-1394), Open Firmware (IEEE-1275), 802.11/WiFi, Bluetooth, ATA/SATA, etc., even implementing some standards before anyone in the PC industry did. You take WiFi for granted now; look where Apple was with AirPort two *years* before any comparably priced offerings were available in the PC marketplace, to say nothing of the additional two years it took to get remotely comparable ease of use with Windows XP SP2. The OS is based on completely open standards whenever possible, and while the whole OS itself is proprietary, the entire core OS is open source. Mac OS X is the single most desirable operating system in many scientific, life/bio-science, engineering, and even some IT marketplaces.
When Apple went to the G5 (IBM PowerPC 970), IBM promised 3 GHz within 12 months of that date. Sure, no one can predict the future perfectly, but they missed that target by *over a year*. What was Apple to do? Now Apple has removed the last vestige of what could even be remotely called non-mainstream hardware from their computers - namely, the CPU - and you still chastise them, while getting in factually incorrect jabs about how Apple's marketshare is decreasing; surely, Apple is around the corner from certain death! As it has been, apparently, for nigh on 30 years.
I'm not disagreeing that there was POV garbage in the original article; what I'm saying is that your grossly overstated position is itself POV. Granted, it's not in the actual article, but neither is my reply. Apple absolutely switched from PowerPC because it was time. And it wasn't so much that they couldn't keep IBM and Motorola/Freescale "well fed", it's that they couldn't keep them well fed at prices that would allow Apple to be more competitive in the general PC marketplace. This decision is nothing but a good one, and makes Macs essentially high-end, high-quality PCs. The ability to now run Mac OS X PLUS any x86 OS in a sure-to-exist virtual machine/VMware-like environment on a sleek Apple laptop (Apple hardware is consistently and continuously ranked #1, ahead of all other manufacturers, in quality, support, lack of need for repairs, and so on, by leading consumer organizations like Consumer Reports) will be a major coup for Apple.
In closing, your incorrect analysis is unfortunately somewhat common. I hope this helps to clear things up. - firstname.lastname@example.org
Rewrite needed

This article's problem isn't POV. The problem is the fact that it isn't an encyclopedia article; it's an essay full of opinion and analysis. It needs major work. Tverbeek 17:07, 19 Jun 2005 (UTC)
- I just finished up some heavy editing of the important bits, and I went ahead and deleted the whole Future section since it's not really relevant to the article, being mostly off-topic analysis. The Hurdles section could still use some rearranging, but most of the POV problems should be fixed now, hopefully. Let me know if there's anything I missed. -lee 05:45, 22 Jun 2005 (UTC)
Emulation/Virtualisation

"Virtual PC, a Windows emulation solution for Apple PowerPC sold by Microsoft, could now enjoy much more success with performance improved through virtualisation rather than emulation. For those customers wishing to achieve a more conventional environment, a dual, triple, or even quadruple boot solution would likely be possible on an x86 Apple device."
Not sure if this belongs in the article or not, but it's a fair bet that as soon as the official machine is on the market, a version of WINE will be built for the platform, too (in much the same way as there's a version built for Solaris x86), enabling direct use of Windows applications in OS X.
Unless, of course, that's included and makes WINE totally unnecessary. But I'll also bet anything that Microsoft'll do everything in their power to prevent that from happening.
Dodger 07:55, 23 December 2005 (UTC)
- I second that, this article is a lot of words and little content. I came here searching for compatibility questions and find an article about customer perception in hard times of change in the Jobs cult. Gotta love those Mac egomaniacs. —Preceding unsigned comment added by 126.96.36.199 (talk) 05:59, 15 December 2010 (UTC)
Comparing Mac / Windows
I added a note about how Macs will be protected from Windows viruses because Mac OS X doesn't give users admin access by default, but it was reverted. It's cited in the Windows XP article in the criticisms section, so I'm going to put it back in. 188.8.131.52 22:04, 29 December 2005 (UTC)
- It is also worth noting that even as an Administrator, to change something in the system, or install something with "System" privileges, you MUST input the administrator password (unless you hack it, but you shouldn't). However, in Windows, being an administrator allows any program you execute to get administrator or system privileges. — Claunia 22:34, 29 December 2005 (UTC)
That’s certainly one of the ways in which Mac OS X’s security is better than that of Windows, but it has nothing to do with the Intel transition. The fear that Intel-based Macs would somehow be vulnerable to Windows viruses is widespread enough to deserve mention, but it is also based completely on misunderstanding. Since the presumed ‘problem’ is actually impossible in the first place, there is neither a need for anything to protect against it, nor any way in which anything could do so. David Arthur 15:07, 30 December 2005 (UTC)
- Impossible? Why? Heck, it isn't even impossible today. The PowerPC bc (branch conditional, using the "branch always" encoding) instruction can be made to act as a big fat nop (do nothing) instruction on x86. With an x86 Mac, you'd only need to look around in memory a bit to determine that the machine is indeed not running Windows. You could even share much of the exploit binary. You can hit other operating systems too while you're at it, as long as they all run similarly buggy apps. 184.108.40.206 06:34, 2 January 2006 (UTC)
- In other words, you’re saying that it would be theoretically possible to write a virus that affected both systems, provided that they had the same security flaws. That doesn’t mean that Windows-only viruses would magically start affecting the Macintosh just because it has an x86 processor. David Arthur 17:41, 2 January 2006 (UTC)
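To make the cross-architecture point above concrete, here is a rough sketch (Python, illustrative only, and decidedly not working exploit code) of the trick described: a PowerPC branch-always instruction encodes to four bytes whose first byte, 0x42, happens to decode as the harmless `inc edx` on IA-32. The encoding below follows the PowerPC `bc` instruction format (major opcode 16, BO=20 for "branch always"); as the comments note, the remaining three bytes would still need careful choosing to be benign on x86, which this sketch does not attempt.

```python
def ppc_bc_always(offset):
    """Encode PowerPC `bc 20,0,.+offset` (branch-always) as big-endian bytes.

    Field layout: opcode(6) | BO(5) | BI(5) | BD(14) | AA(1) | LK(1).
    BO = 20 (0b10100) means "branch unconditionally".
    """
    assert offset % 4 == 0  # PPC branch displacements are word-aligned
    opcode, bo, bi = 16, 20, 0
    word = (opcode << 26) | (bo << 21) | (bi << 16) | (offset & 0xFFFC)
    return word.to_bytes(4, "big")

code = ppc_bc_always(8)
print(code.hex())  # -> "42800008"
# On PowerPC: branch forward 8 bytes, skipping the next instruction.
# On IA-32:   0x42 = inc edx; 80 00 08 = add byte [eax], 8 -- it decodes,
# but would only be truly harmless if the following bytes were also chosen
# with both instruction sets in mind.
```

This is why a deliberately crafted dual-architecture binary is conceivable, while an ordinary Windows-only virus still does nothing useful on a Mac.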
Viruses & Virtual Machines
Chris83 is incorrect. Viruses running in Wine can affect the host system. It would require additional work for a virus/trojan/etc to propagate under Wine, but any malware that recursively trawls available directories and attempts to modify/delete listed files would be able to do so (modulo Wine's permissions). Frankie 15:38, 1 July 2006 (UTC)
- Please note that none of the viruses that Matt tried actually worked, even after he took extra efforts to help a couple of them. So it is still purely speculation. AlistairMcMillan 17:00, 1 July 2006 (UTC)
- What is your definition of "worked"? Most of them executed in some fashion, a few successfully wrote files (the important part), and one pegged the CPU. It is not speculation to say that Windows malware could arbitrarily alter any directories that are writeable under the virtualizer's permissions. Frankie 00:36, 2 July 2006 (UTC)
- Something that modifies files and directories on your own machine isn't a virus. Isn't the main point of a virus to propagate? Did any of these Windows viruses manage to do that in Wine? Could any of the machines even be said to be infected after the viruses were run? I'm not saying it isn't possible that it might happen in the future. I'm just saying it isn't true now and we don't know whether it might be true in the future. So it is speculation. And we have rules about that. AlistairMcMillan 01:30, 2 July 2006 (UTC)
ABI?

First of all, I dearly hope Apple isn't messing with 32-bit. Even Intel supports x86-64 now, so there is no excuse. :-( Anyway, there are lots of ABIs to choose from:
- 32-bit pointers, or 64-bit pointers
- stack only, or N parameters in registers
- caller clean-up, or callee clean-up
- long double: 64-bit, 80-bit unpadded, 80-bit in 96 bits, 80-bit in 128 bits with 64-bit alignment, 80-bit in 128 bits with 128-bit alignment...
- 64-bit values with 64-bit alignment, or with 32-bit alignment
- sizeof(long)==sizeof(void*) like nearly every system, or sizeof(long)==4 like Win64
- if passing in registers, is it N total? Is it N int plus M float, etc.?
- executable stack?
- executable heap?
- position-independent executables?
- size of address space for user code?
- access to thread-local data is how?
- access to system calls is how?
- how are variable-argument functions handled?
- do K&R C functions work fine?
- is there a frame pointer?
- where in memory does executable stuff get mapped? (low 16 MB, low 4 GB, above 4 GB...)
- what about StackGuard, ProPolice, or other stack smashing protection?
18.104.22.168 06:24, 2 January 2006 (UTC)
- If you're curious about the ABI, see the Application Binary Interface section of the Universal Binary Programming Guidelines. That section says it's like the System V ABI for x86, with some changes (although it fails to list changes such as "uses Mach-O rather than ELF" :-)).
- "Access to system calls" is "through dynamically linked procedure calls to system libraries", just as in the SV ABI (except for _exit(); I suspect the OS X ABI doesn't include that exception, and I'm not even sure why the heck it's in the SVR4 x86 ABI - at least when I was at Sun, the intent was that the ABI would be based entirely on procedure calls, which is why there's a mechanism for an "interpreter" section in binaries, so the C startup code doesn't have to make system calls to load the run-time linker). Guy Harris 22:56, 11 March 2006 (UTC)
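Several of the ABI dimensions in the list above (pointer width, `sizeof(long)`, and how `long double` is padded) can be observed directly from a running process. A small sketch, assuming only the Python standard library; the exact numbers printed depend on the platform ABI, which is precisely the point:

```python
import ctypes

# Each of these sizes is fixed by the platform ABI, not by the language:
for name, ctype in [
    ("void *", ctypes.c_void_p),           # 4 on IA-32, 8 on x86-64
    ("long", ctypes.c_long),               # 8 on LP64 Unix, 4 on Win64 (LLP64)
    ("long double", ctypes.c_longdouble),  # e.g. an 80-bit x87 value padded to
                                           # 12 or 16 bytes, or a plain double
]:
    print(f"sizeof({name}) = {ctypes.sizeof(ctype)}")
```

On 32-bit x86 Linux this typically prints 4/4/12; on x86-64 Linux, 8/8/16: two different sets of answers to the same questions from the list above.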
Boot sector viruses
Even having a BIOS, as current PCs do, doesn't allow boot sector viruses to be effective. They work in real mode, and as soon as the operating system switches the CPU to protected or long mode, the virus code has no effect at all, as its memory is reclaimed by the supervisor (the protected-mode OS) and gets overwritten. If you design a virus to run in protected mode, it will not allow a protected-mode OS to load, so it cannot spread either. PC boot sector viruses are only effective while running DOS or another real-mode OS. —Claunia 13:47, 4 January 2006 (UTC)
- No, here is how: First be in real mode, and hook the BIOS calls to read from the disk. Start the normal boot loader (MacOS X, Windows 2003 Server, Linux...), letting it do its thing. Watch as the boot loader loads in the OS. When the boot loader loads in the last bit of the OS, patch the OS image in memory. Let the OS switch to protected mode. The OS will call back to the virus, since it has been patched. As the system continues to boot, keep adding hooks. Eventually you'll have the system call table patched, etc. AlbertCahalan 08:02, 5 January 2006 (UTC)
- We don't get boot sector viruses much anymore because we don't often trade and boot from removable rewritable physical media. The floppy is mostly dead, and the BIOS probably isn't set to boot from it anyway. AlbertCahalan 08:02, 5 January 2006 (UTC)
- Most protected-mode OSes (all but Windows 9x) clear all memory, don't call real-mode code at all, and are (most of them) closed source, making boot sector viruses unfeasible.
- They would work if made as you described, AlbertCahalan, but would be limited to ONE operating system, ONE revision.
- —Claunia 08:45, 5 January 2006 (UTC)
- It's often easier to deal with closed-source, because a closed-source kernel is constrained to have a stable driver ABI. Data structures in the kernel are much more predictable. Linux data structures vary with the compiler version and about a zillion different compile-time config options. Being multi-OS means that you package up two versions of the OS-specific code. It's usually easy to avoid the memory-clearing problem, since an OS typically asks firmware (BIOS,OpenFirmware,etc.) for the memory layout. The tricky thing is to find the right moment to patch into the OS. Considering a Linux example (which is somewhat impractical because of the volatile ABI), you'd first use real-mode code to patch in around/after the kernel's decompression code. When the decompression is done, you get called in protected mode. Relocate yourself to some place reasonably safe, being sure that it will be mapped in the page tables. Then you might scan for the idle loop (via disassembly) and patch yourself in there. When that patch gets called, scan for symbol table data and hijack a built-in kernel task. MacOS X is probably fairly similar. AlbertCahalan 10:13, 5 January 2006 (UTC)
Extensible Firmware Interface
Now that it is confirmed that Intel Macs are using Extensible_Firmware_Interface, does this rule out the possibility of running Windows XP without any kind of virtual machine program? --Windsok 13:37, 11 January 2006 (UTC)
- Confirmed where? —Claunia 13:40, 11 January 2006 (UTC)
- Thanks. —Claunia 13:54, 11 January 2006 (UTC)
- Found some information on (future?) Microsoft support of EFI http://www.microsoft.com/whdc/system/platform/firmware/default.mspx . Apparently the Longhorn/Vista beta already supports EFI on x86? --Windsok 14:06, 11 January 2006 (UTC)
- Pretty much, but I don't see that it matters. (just writing your own Windows XP boot code should do the job -- I think it's HAL.SYS and NTLOADER.SYS you need to replace) Much more interesting and useful: Linux has EFI support. Normally this is only used for the Itanium systems, but using it for plain x86 should be doable. AlbertCahalan 05:38, 12 January 2006 (UTC)
- Theoretically, taking the NTLDR of Windows IA-64 should just work, as EFI bootloaders are bytecode and not machine code.
- Also making a mini boot loader that simulates a minimal BIOS to load the IA-32 NTLDR should be enough.
- The HAL doesn't need to be modified, because as soon as NTLDR has loaded the kernel and drivers (HAL included), NT doesn't use the BIOS at all, but directly accesses the hardware.
- —Claunia 15:03, 12 January 2006 (UTC)
- P.S.: Currently ELILO (Efi LInux LOader) is able to load Linux for both IA-64 and IA-32. Don't know about x86-64 (is there EFI for x86-64 currently?)
- —Claunia 16:25, 12 January 2006 (UTC)
- See Talk:Apple-Intel architecture#New article. —Ævar Arnfjörð Bjarmason 16:03, 12 January 2006 (UTC)
The commercials?

Ok, so what about the recent Apple/Intel commercials? Maybe some comment on the controversy about them would be appropriate? Or does it belong in another article?
Advanced Computation Group
Most of the work of the Advanced Computation Group has been directed toward high-end applications of PowerPCs. What will happen to them? I raised the question in the ACG article, but do not know if there is a publicly-known answer.
Video of the announcement
Does anyone have the video of Jobs announcing the Intel transition? McDonaldsGuy 01:12, 26 April 2006 (UTC)
- Would that be the WWDC 2005 keynote? (Found with a Google for "video wwdc 2005 site:apple.com".) Guy Harris 08:56, 26 April 2006 (UTC)
Registers in Intel Core?
This article mentions lack of registers as a drawback of x86. Is this still true in Core? Exactly how many registers do Conroe/Yonah/etc have compared to Pentiums or PPCs? Frankie
- Intel Core Duo and Solo (Yonah) are IA-32 processors, with 8 count 'em 8 integer/pointer registers, the same as other 32-bit x86 processors; POWER/PowerPC has 32 integer/pointer registers (or 31 - I forget whether one of the registers is a hardwired zero). Intel Core 2 processors, such as Conroe and Merom, as well as the other upcoming Intel Core Microarchitecture processor, the Xeon Woodcrest, are EM64T processors, and, in 64-bit code, have 16 integer/pointer registers. 32-bit code on AMD64/EM64T processors can still only use the first 8 of them, however. Guy Harris 18:10, 1 June 2006 (UTC)
- Aha, I see, it's a hardcoded legacy thing. Well, that's just plain craptacular. Frankie
- And why would your typical Mac user, who sits and post pictures of his tongue piercings to Flickr and plays with Photoshop, even care about the instructions his CPU runs, or how many registers it has? 22.214.171.124 07:17, 27 August 2006 (UTC)
A simple question:
Does this mean that Macs are essentially going to become PCs? Other than an x86-compatible CPU, what qualifies a computer as being a PC? The QBasicJedi 04:17, 18 June 2006 (UTC)
- Yes, in terms of hardware, all new Macs are now PCs with the exception of a couple of models still to be updated, and Apple has stated this will happen in 2006. The main difference at the moment is the start-up firmware. Apple adopted the latest Intel standard, EFI, but Microsoft later decided not to use it for 32-bit software, so some extra software (e.g. Boot Camp) is needed to install Windows on new Macs.--agr 17:29, 18 June 2006 (UTC)
- I am not sure, but I think they are still using SATA interfaces and non-standard motherboards. I would not call them "PC"s. --rogerd 18:01, 18 June 2006 (UTC)
- What's a "standard" motherboard? One designed by, or designed and built by, one of the major motherboard manufacturers? If so, are there no PCs whose vendors design their own motherboards?
- A standard PC has a determined memory map (that is, where firmware, devices, and so on are located). That is what makes it a PC.
- Another machine, even with same processor and devices (such as PC98) but different memory map and/or firmware is a different computer.
- What exactly an Intel Macintosh is, is a curious question, as it is not really a PC (it lacks a BIOS and the PC memory map) but can behave as one (via the EFI compatibility interface that allows Windows XP to run).
- —Claunia 22:00, 18 June 2006 (UTC)
- Claunia, given that the majority of "PCs" (aka Windows-compatible laptops or desktops) sold today are actually PC98 or later, is it your claim that they are "not really PCs"? If so, it appears the rest of the world has left you behind and moved on to newer definitions of PC.
- From a cursory inspection, Macintels appear to meet PC2001 standards. Frankie
Merom in July
It appears that both Conroe and Merom have already started shipping. I was about to add this to the article right now, but for the sake of accuracy it can afford to wait two more days when Intel makes the official announcement. Frankie 12:43, 25 July 2006 (UTC)
Not sure about this statement...
(This also represents the first time in the history of the platform that applications dating from 1984’s original 128k Mac have been unable to run on a stock Mac.)
I don't believe that this is correct... old programs from the 128k Mac often have problems running due to unsupported resolutions and colors... I can think of several games that required 256 colors to run properly, and a lot of more recent Macs simply couldn't be put in 256 color mode... —The preceding unsigned comment was added by Tmorrisey (talk • contribs) 04:59, 12 December 2006 (UTC).
- I also have to complain about this statement: it is widely known that with every OS change, most software was unable to run properly on the new incarnation. The claim about software as old as stated is simply untrue. 126.96.36.199 (talk) 18:20, 27 May 2011 (UTC)
fair use / free image
I replaced fair use image with public domain one. I thought, that FU can be used, only when (citing)
Where no free equivalent is available or could be created that would adequately give the same information.
but Stormwatch changed it back with message
Changed back to the fair use image. Try again when the free image is not a SHITTY BLURRED MESS.
I thought that if there is an equivalent to a copyrighted image, I should use it - but that one was a shitty blurred mess. Which one should Wikipedia use? I simply don't know --Have a nice day. Running 13:19, 27 January 2007 (UTC)
Timeline Update

The timeline needs updating.
- Yes. Everything after the Mac Pro needs to be removed, as that was the point at which the transition was complete. That way, we don't end up uselessly listing every new Mac or update to existing Mac model (that belongs in the List of Macintosh models grouped by CPU type article). Guy Harris 18:44, 26 July 2007 (UTC)
Open Firmware vs. EFI
Why did Apple switch to EFI rather than using Open Firmware on x86? All the article says on the topic is that EFI is better than the traditional PC BIOS, which seems pretty irrelevant. When I asked around, some people speculated licensing issues. But since Apple was already using Open Firmware, that doesn't seem like the problem. The most likely explanation seems to be simply that Intel was pushing it, and Apple is sleeping with Intel... Aij (talk) 00:54, 15 February 2008 (UTC)
Rewrite needed

I imagine most of this was written in 2005-6 in response to contemporary press reports. It really should be rewritten from the ground up to provide a more historical perspective. I removed the "Viruses" section because it was pretty much entirely an outdated editorial. 188.8.131.52 (talk) 06:09, 20 August 2008 (UTC)
- Agreed. Wish I had more time. I did clean up the Reasons section a bit, due to the slight holy-war tilt in the edits there over AMD (trimmed both the uncited "industry standard" claim and the uncited "well-known fact that AMD chips are power-hungry" claim, but that para still needs cites). HelpnWP (talk) 02:31, 8 June 2009 (UTC)
so why exactly did they switch?
do we have to wait 20 years for the official history to be released?
Wrong statement

Apple is the only computer company to have successfully completed such a transition, as every other manufacturer who has tried - Commodore, Atari, Acorn Computers, Digital Equipment Corporation, SGI, Kaypro - have all failed.
I believe this is wrong; for example, DEC had a transition from various PDP models to VAX to being part of Compaq. SGI also changed the architecture of its workstations multiple times (MIPS -> Itanium -> Xeon).
- Sure that's wrong but you have to understand that for the average iPhone-junkie, computer history starts with Steve Jobs and the knowledge horizon ends at the outlines of the iPad —Preceding unsigned comment added by 184.108.40.206 (talk) 06:03, 15 December 2010 (UTC)
"DEC had a transition from various PDP models to VAX to being part of Compaq." You left out Alpha. That's failing, not succeeding. And likewise with SGI, they lost everything after they left MIPS. — Preceding unsigned comment added by Pantergraph (talk • contribs) 15:22, 15 December 2010 (UTC)
This is an encyclopedia article, not an article for misconceptions of "the average iPhone-junkie". DEC and SGI did not fail because of CPU transitions; the PDP->VAX transition was inarguably successful, and the VAX->Alpha was fine - DEC ran into problems for other reasons, not due to the VAX->Alpha transition. SGI was already having problems when they went MIPS->Intel, and this transition did help them. After being purchased by Rackable Systems (and renamed back to SGI), they are now actually profitable, and still selling some of the very systems that were transitioned from MIPS to Intel. The IBM AS/400 transitioned from a 48-bit custom CISC CPU to the 64-bit PowerPC with no customer impact (they'd back up programs from the old system, restore them to the new system and go). Both Alpha->Itanium and PA-RISC->Itanium were IMHO commercial mistakes, but in technical terms were also nice and seamless. Please, just take this one sentence out; it's false, and smacks of the tendency for Apple fans to exaggerate Apple's achievements. 220.127.116.11 (talk) 22:34, 27 February 2011 (UTC)
I edited the statement; none of the above are personal computer companies. The statement is true and was commented on widely at the time of the 68k->PPC transition. I don't own an iPhone. Compilation finished successfully (talk) 20:29, 30 March 2011 (UTC)
AMD's lack of low power CPUs?
I recall AMD having a CPU that produced less heat at full speed than the Intel CPU Apple chose did at idle. That CPU was in production when the first Intel Mac developer systems were released, so Apple could have gone with AMD. Bizzybody (talk) 18:23, 8 April 2013 (UTC)