Talk:Apple M1: Difference between revisions

Revision as of 23:05, 18 January 2023

This article is within the scope of WikiProject Computing (rated C-class, Low-importance; supported by the Computer hardware task force, assessed as Mid-importance) and WikiProject Apple Inc. (rated C-class, Mid-importance), collaborative efforts to improve Wikipedia's coverage of computing and of Apple, Mac, iOS and related topics. If you would like to participate, please visit the project pages, where you can join the discussion and see a list of open tasks.

Wiki Education Foundation-supported course assignment

This article was the subject of a Wiki Education Foundation-supported course assignment, between 4 October 2021 and 9 December 2021. Further details are available on the course page. Student editor(s): Beccabubu.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:28, 17 January 2022 (UTC)[reply]

Reconcile this with Apple's motion coprocessors

Apple now has two series of processors designated as "M" series: this one, the first of its desktop SoCs, and their motion coprocessors. Both of these articles should make it clear that the other exists and has its own article.

Ben Leggiero (talk) 17:21, 11 November 2020 (UTC)[reply]

There's no Apple M1 motion coprocessor; the motion coprocessors start with M7. (In addition, starting with the M9, the motion coprocessors aren't chips of their own - they're just components of A-series chips, just as the CPU, GPU, and, in newer chips, the "neural engine" are components of the chip.)
So I don't think pages for individual M-series Mac chips need to refer to Apple motion coprocessors until there's an M7 chip. Perhaps the section of Apple-designed processors linked to by Apple M series, namely Apple-designed processors#M series, should have a hatnote saying something such as "Apple M series redirects here; for Apple's line of motion coprocessors, see Apple motion coprocessors", using Template:Redirect. Apple motion coprocessors could, similarly, have a hatnote saying "This is about Apple's M series of motion coprocessors; for the M series of systems on chips for Macs, see Apple-designed processors#M series." Guy Harris (talk) 20:22, 11 November 2020 (UTC)[reply]
I understand and agree. I just wanted to get ahead of the problem before it crops up. – Ben Leggiero (talk) 15:55, 12 November 2020 (UTC)[reply]
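The hatnote Guy Harris suggests above can be written with Template:Redirect; a sketch in wikitext, assuming the template's usual positional parameters (redirect name, other use, other target):

```wikitext
{{Redirect|Apple M series|Apple's line of motion coprocessors|Apple motion coprocessors}}
```

This would render along the lines of ""Apple M series" redirects here. For Apple's line of motion coprocessors, see Apple motion coprocessors." at the top of Apple-designed processors#M series.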

Additional hardware info useful

For example, cacheline size is useful for high-speed application development. Does anyone have that data yet? Thanks. --Rsjaffe (talk) 19:08, 17 November 2020 (UTC)[reply]

You're best off going to WikiChip for that sort of data, and that sort of data is best kept there. Wikipedia is not for data science results. There are many other similar statistics, useful to developers targeting the chip, that need a home. Having said that, I am removing the AMX entry in the main document for the same reason and would appreciate it if someone would ratify my unlogged-in edit 86.21.8.98 (talk) 18:14, 18 May 2021 (UTC)[reply]
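For anyone who wants the number for their own machine rather than from an article, the cache-line size can usually be queried from the OS. A best-effort sketch: `hw.cachelinesize` is the macOS sysctl name (reportedly returning 128 on M1 Macs, 64 on Intel Macs), and `SC_LEVEL1_DCACHE_LINESIZE` is the Linux/glibc sysconf name; other platforms may expose neither.

```python
import os
import subprocess
import sys

def cache_line_size():
    """Best-effort query of the L1 data cache line size, in bytes.

    Returns 0 if the platform exposes no obvious interface.
    """
    if sys.platform == "darwin":
        # macOS exposes the value via sysctl.
        result = subprocess.run(["sysctl", "-n", "hw.cachelinesize"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return int(result.stdout.strip())
    # Linux/glibc exposes it via sysconf.
    name = "SC_LEVEL1_DCACHE_LINESIZE"
    if name in getattr(os, "sysconf_names", {}):
        try:
            size = os.sysconf(name)
            return size if size > 0 else 0
        except (OSError, ValueError):
            pass
    return 0

print(cache_line_size())
```

The value may legitimately come back 0 in containers or VMs that don't pass the topology through.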

Is the first sentence correct?

The Apple M1 is the first ARM-based system on a chip (SoC) designed by Apple Inc. for its line of Macintosh computers.

As far as I'm aware, the T1 and T2 chips are ARM-based SoCs powering a separate iBridge device inside Macs, which runs bridgeOS. These pre-date the M1 SoC by just over 4 years. I'll place a {{Dubious}} on that line until/unless we agree on the accuracy of this statement. – Ben Leggiero (talk) 17:55, 19 November 2020 (UTC)[reply]

It's the first ARM-based SoC designed to be the CPU of a Mac. Perhaps it should be rephrased to make that clearer. Guy Harris (talk) 18:45, 19 November 2020 (UTC)[reply]
I thought of doing that, but then I remembered that the T1 and T2 are also CPUs/GPUs of Mac devices, just not the ones which run macOS. The M1 SoC is also used as the Mac's primary display's GPU, neural engine, and other such non-CPU system components, so it also can't strictly be said to be the CPU of the Mac. I think the verbiage you amended into this article is better, but still not entirely accurate. Perhaps something about it being the first SoC which runs macOS? – Ben Leggiero (talk) 20:28, 20 November 2020 (UTC)[reply]
The T1 and T2 aren't CPUs in the sense of being the central processing unit - they're peripheral controllers (Touch Bar), security processors, and start-up processors (sort of like the console processors that some IBM mainframes have).
They're GPUs in the sense of 1) drawing the Touch Bar and, I think, 2) drawing on the screen early in the boot process and possibly drawing the battery image when you plug it in, but they're not what applications and WindowServer use to draw the GUI.
The M1 SoC may be more than just a CPU, but the CPU(s) are part of it. And I think some x86 processors Apple's used have on-chip GPUs, in which case they're also more than just CPUs. Guy Harris (talk) 20:42, 20 November 2020 (UTC)[reply]
I mean, yes and no. I looked into it, and I encourage you to as well in case I'm wrong, but as far as I can tell, the Touch Bar, camera, mic, etc. are all part of a separate device, which is interwoven into the Mac as much as the Intel device is. This device, the iBridge, runs bridgeOS, which is a fork of watchOS. It's an entire device, with its own central processing and all. It's not like a motion coprocessor or tensor unit or discrete GPU; it actually functions as a separate device. The iBridge device communicates with the Intel chipset via a permanent internal USB connection. It's analogous to plugging a phone into the computer; even if one can control the phone with the computer and use it as a peripheral device, we would never say that the phone no longer has a CPU in this state. The sticky part is that iBridge is actually physically a part of the Mac in the same way that the Intel chipset is, and that the rest of the Mac cannot function without iBridge, nor can iBridge function without the rest of the Mac, since iBridge takes the responsibility of managing boot and other secure operations, and the Intel device tells the iBridge device what to display, when to activate its camera, etc. – Ben Leggiero (talk) 21:58, 20 November 2020 (UTC)[reply]
Yes, many computer systems have more than one processor in them, but only some of the processors are generally considered "central" processors.
The KL10 had a PDP-11/40 as a front end processor; that front-end processor was responsible for starting up the main processor (that might include loading CPU microcode, as well as booting the CPU).
Various IBM mainframes also had console processors; I think some were PowerPC microprocessors, and they may have run either IBM's internal Workplace OS's OS/2 personality or Linux or both.
And I think the System Management Controller (SMC)/Power Management Unit (PMU)/System Management Unit (SMU) on Macs without the T-series chips included microcontrollers of some sort.
So, yes, a lot of computer systems have processors in them that, were you to remove them, would brick the system, but I don't see them being as significant as the main processor. Apple replacing whatever microcontroller is used in the SMC/PMU/SMU (which might well have included an ARM core of some sort) with a processor of their own design (even if it's powerful enough to run Darwin as its OS) is "well, that's nice, but it doesn't affect users that much"; Apple replacing the x86 "application processor", if you will, with an ARM processor of their own design is a lot more significant. Guy Harris (talk) 01:59, 21 November 2020 (UTC)[reply]

Neural Engine

What is the purpose of the Neural Engine and what is the significance of the trillions of operations it can allegedly, but uncitedly, perform each second? 87.75.117.183 (talk) 04:03, 21 November 2020 (UTC)[reply]



(sorry it isn't indented - couldn't figure out how :/ ) — Preceding unsigned comment added by Synt4x 3rr0r at Line 420 (talkcontribs) 03:14, 6 December 2020 (UTC) EDIT: SineBot, I think it's obvious that I wrote that.[reply]

The Neural Engine was designed to accelerate machine learning and AI tasks. AFAIK, the chip (or part of the chip, I guess) is an ASIC (application-specific integrated circuit), which means that it is very fast and efficient, though only at the task that it is meant to do (AI and machine learning). Synt4x 3rr0r at Line 420 (talk) 03:12, 6 December 2020 (UTC)

Apparently multiply-accumulate operations (MAC operations) are popular operations in artificial neural networks, and there are several patent applications from Apple discussing neural network processors that, among other things, perform multiply-accumulate operations. See, for example, US Patent Application No. 20190340489 NEURAL NETWORK PROCESSOR FOR HANDLING DIFFERING DATATYPES, US Patent Application No. 20190340486 PERFORMING MULTIPLY AND ACCUMULATE OPERATIONS IN NEURAL NETWORK PROCESSOR, US Patent Application No. 20190340502 PROCESSING GROUP CONVOLUTION IN NEURAL NETWORK PROCESSOR, and US Patent Application No. 20190340491 SCALABLE NEURAL NETWORK PROCESSING ENGINE. (For more patents, search for patent applications with various of the inventors listed and with an assignee name of "Apple" on the US Patent and Trademark Office patent application search.)
They might be done, for example, when doing convolutions of functions (convolutions are also mentioned in those patents) - calculating the integral of the product of two functions would appear to involve multiplying the values of the functions at various points and adding them. Guy Harris (talk) 06:26, 6 December 2020 (UTC)[reply]
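The multiply-accumulate pattern those patents describe is easy to make concrete. A toy sketch of a 1-D convolution (in the cross-correlation form that ML frameworks use) written as explicit MAC operations - purely illustrative, and nothing here reflects how the Neural Engine is actually implemented:

```python
def conv1d(signal, kernel):
    """1-D convolution (valid mode) as explicit multiply-accumulate loops."""
    n = len(signal) - len(kernel) + 1
    out = []
    for i in range(n):
        acc = 0  # the "accumulate" register
        for j, k in enumerate(kernel):
            acc += signal[i + j] * k  # one MAC: multiply, then add into acc
        out.append(acc)
    return out

print(conv1d([1, 2, 3, 4], [1, 1]))  # → [3, 5, 7]
```

A dedicated engine wins by performing many such MACs per cycle in parallel hardware rather than one at a time in a loop.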

64 Bit?

Is the M1 chip 64 bit? 32 bit? N0w8st8s (talk) 01:16, 23 December 2020 (UTC)n0w8st8s[reply]

64-bit. (Apple doesn't do 32-bit Macs any more; Catalina doesn't even support 32-bit applications.) Guy Harris (talk) 02:42, 23 December 2020 (UTC)[reply]
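For what it's worth, the word size is easy to confirm from userland; this reports the running interpreter's pointer width, which on macOS on an M1 (a 64-bit-only platform) is 64:

```python
import struct

def pointer_bits():
    # "P" is the native C pointer format; size in bytes times 8 gives the width.
    return struct.calcsize("P") * 8

print(pointer_bits())  # 64 on any 64-bit build
```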

Neutral point of view contra malware on M1

I agree that there is some malware for M1 Macs. But reporting on it in this article goes against Wikipedia's striving to keep articles balanced, fair and neutral. Reporting on malware targeting M1 Macs, while quite true, is more of a novelty, a headline grabber, and it is not reported for any other platform, so fairness, neutrality and balance go right out the window. Malware is not a defining point of M1 processors (yet). I think such segments should be left out until the criteria for fairness, balance and neutrality are met. -- Henriok (talk) 13:34, 24 February 2021 (UTC)[reply]

@Henriok:: Thanks for bringing this up. I am not sure how the malware info affects the "fairness, neutrality and balance" of the article, but a bit to your point, I have searched through other "chip" articles, such as Opteron and so on, and they all tend to keep viruses/malware/software issues off their pages. So, what do you say: should I scrap the whole malware info I added yesterday? Let me know your thoughts and I will revert it back. If anyone else will chime in, that would be great too. Thanks again, Kolma8 (talk) 14:32, 24 February 2021 (UTC)[reply]
Malware tends not to be processor-dependent. It may be instruction-set-dependent, either because it's only available as machine code for a particular instruction set, because it depends on a quirk of an instruction set or on the standard calling sequence of the instruction set, or because it won't work if a particular instruction set feature is implemented by the processor on which it's running and is being used by software (NX bit, Intel MPX, the CHERI capability mechanisms, ARM pointer authentication codes, etc.); that would be one way in which it could be processor-dependent, in that some processors might not implement those protective features and similar forms of protection aren't being implemented in software.
Malware may also be dependent on the software platform on which it's running - malware for Windows on x86-64 probably won't cause a problem with macOS or Linux or FreeBSD or NetBSD or Solaris or... on x86-64.
So articles about particular processors tend not to discuss malware; it's probably usually discussed on pages for operating systems, browsers, and other software platforms.
The M1 is the first ARM64 Mac processor, so it's the first processor that won't run x86-64 malware unless it's all-too-faithfully translated by Rosetta 2 :-), and a malware binary for it won't work on an x86-64 Mac. It also implements the aforementioned ARM pointer authentication codes, which might prevent some forms of malware from running if the OS is using it, but that doesn't mean it's immune to malware.
So malware for ARM-based Macs is probably best discussed on the macOS or macOS Big Sur page. Guy Harris (talk) 20:20, 24 February 2021 (UTC)[reply]
Or perhaps on the Mac transition to Apple Silicon page. Guy Harris (talk) 20:57, 24 February 2021 (UTC)[reply]
Yeah, that occurred to me too. I think it'd fit there better, since that certainly is an aspect of the Arm tradition. As is any software support really. -- Henriok (talk) 20:42, 25 February 2021 (UTC)[reply]

A new edit

@Henriok and Guy Harris: Can you guys look at this contribution and let me know your thoughts? See here [[1]]. Kolma8 (talk) 21:39, 1 March 2021 (UTC)[reply]

As I noted in that section, macOS 11.2.2 has a change that fixes a problem that sounds similar to that problem, and that may also occur on Intel-processor Macs. I'm not an expert in 1) USB charging or 2) what's under the control of the software, but if, for example, the USB spec imposes some limits on how much power can be supplied, and the "non-compliant" hubs to which Apple are referring pump out too much power, and the CPU can, under OS control, somehow compensate for that, then that might be what's going on. Apple indicated that the Macs with this issue are "MacBook Pro (2019 or later) and MacBook Air (2020 or later)", which definitely isn't "M1 Macs only". Guy Harris (talk) 22:09, 1 March 2021 (UTC)[reply]
I don't want an encyclopedic article about a processor, albeit a system on a chip, to include what are essentially software issues. If it's hard to discern, and unclear whether an issue is in the subject matter, supporting platform, firmware, drivers, operating systems, or offending third-party products, don't include it. But if it turns out to be, given time, a typical or defining feature isolated to or heralded by the subject matter, then OK, include it. -- Henriok (talk) 23:13, 1 March 2021 (UTC)[reply]
For what it's worth, one post in this MacRumors forum thread says that "lots of powerd hubs are feeding current back to the host" and that this "isn't allowed by definition", with somebody else saying that "[they've] seen USB ports on Wintels and Macs alike completely fried when using cheap USB hubs."
This story in The Register points to this review of a USB-A to USB-C cable with the title "Surjtech's 3M USB A-to-C cable completely violates the USB spec. Seriously damaged my laptop.", the laptop being a Chromebook; both the Chromebook and a USB analysis device got damaged. The reviewer says of the Chromebook:

On my Pixel, both USB Type-C ports stopped responding immediately. Neither would charge or act as a host when I plugged in a USB device such as an ethernet adapter. Upon rebooting my Pixel, the system came up in recovery mode because it could not verify the Embedded Controller on the system. No amount of software recovery could revive the EC. Upon closer analysis, serious damage has been done to components related to charging and managing the USB Type-C port's capabilities.

So it appears that it's possible for "non-compliant" equipment to do damage if you plug that equipment into your machine.
Why only some Apple models are mentioned is another matter. Perhaps there's something about the hardware making them more vulnerable (but that's not M1-only, apparently). And why this can be dealt with in software is yet another matter - perhaps there's something the software can configure the hardware to do to protect itself but that the OS wasn't doing.
But, given the indication that M1-based Macs aren't the only ones vulnerable to this sort of thing, perhaps this belongs in articles about the laptops, not about the processor in them. Guy Harris (talk) 23:43, 1 March 2021 (UTC)[reply]

Performance benchmarks vs SMT hardware

This block of text was removed in recent edits.

 The benchmarking methodology for single thread synthetic benchmarks was criticized as being flawed when comparing to simultaneous multithreading enabled x86 CPUs.[1][2]

References

  1. ^ "Exclusive: Why Apple M1 Single "Core" Comparisons Are Fundamentally Flawed (With Benchmarks)". December 2, 2020.
  2. ^ "Current x86 vs. Apple M1 Performance Measurements Are Flawed". December 7, 2020.

The reasoning given in the comments was:

  • Removed a direct rehash of an already quoted source
  • The referenced article (the only primary source) is largely opinion and intentionally bends the definition of a "single core" benchmark. Single-threaded applications do not run any faster when HT is enabled. "Statistics" shown are unscientific estimations and do not expose actual flaws in single-threaded performance testing

I am not going to reinstate the text without talk and mostly wanted to point out what I see as a potential misunderstanding of the referenced sources. The second source (ExtremeTech) was an independent article building upon the first one and included independent benchmark scores (which the author says are marked in red) that confirm the initial "flaw". Secondly, neither source claims that single-threaded applications run faster when HT is enabled. They are both pointing out, however, that "single-core" benchmarks may be misleading when comparing architectures designed with SMT in mind against others designed without it, because SMT-based architectures don't extract full performance from a given physical CPU core when running only a single thread on it. Benchmarks are always going to be less than scientific, though, and I am unsure if there is any widely accepted consensus on how to use them for hardware-related articles. Perhaps we should reword the claim? -- Ujwal.Xankill3r (talk) 05:14, 9 March 2021 (UTC)[reply]
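The arithmetic behind the dispute can be made concrete with a toy model using made-up numbers (the 1.3x SMT yield below is a commonly quoted ballpark, not a measured figure for any real CPU): if a single thread extracts only part of an SMT core's throughput, then a single-thread score understates that core's total capacity, while for a non-SMT design the two coincide.

```python
def core_throughput(single_thread_score, smt_yield, threads):
    """Toy model of one physical core's total throughput.

    smt_yield is the factor by which running a second hardware thread
    raises the core's combined throughput (1.0 = no SMT benefit).
    """
    if threads <= 1:
        return single_thread_score
    return single_thread_score * smt_yield

# Hypothetical: an SMT x86 core and a non-SMT core both score 1000
# in a single-threaded benchmark.
smt_core_full = core_throughput(1000, 1.3, threads=2)   # 1300
non_smt_full = core_throughput(1000, 1.0, threads=2)    # 1000

# The single-thread comparison (1000 vs 1000) hides a 30% gap in
# per-core throughput under full load - the sources' claimed "flaw".
print(smt_core_full, non_smt_full)
```

Whether that gap makes single-core comparisons "flawed" or simply a different (and still valid) measurement is exactly what the removed text and the revert disagree about.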

Neutral point of view with regards to specs

I've been trying to tone down puffery in this page. Yes, the M1 is fast. No, it isn't a magically instant physics-defying monster of a machine.

This motivated https://en.wikipedia.org/w/index.php?title=Apple_M1&diff=prev&oldid=1031412944 as well as a recent change revert (@Invenio:).

The neutral claims speak for themselves. No need to distract from the raw data with marketing speak 🙂 Arzg (talk) 13:24, 15 July 2021 (UTC)[reply]

No-one is claiming it's a physics-defying monster either, so if we're toning down the hyperbole, let's do that, and not invent things that never happened.
Regarding the large caches, it's a significant feature of this CPU. The size of the caches is pretty much unheard of in the history of CPUs. So it's not puffery, it's just plain facts. I hope AnandTech's take on this, that it's "…absolutely enormous and is 3x larger than the competing Arm designs, and 6x larger than current x86 designs…", is sufficient for this article to claim that it's "unusually large". -- Henriok (talk) 15:57, 15 July 2021 (UTC)
Ah, I see the AnandTech take now. My apologies, thanks for reverting my revert and fixing the ref! Arzg (talk) 16:16, 15 July 2021 (UTC)[reply]
Thanks @Henriok: and @Arzg:. I'm totally against retelling Apple's marketing spiel, which seemed to have inspired some to spruce up this article. I should have located a source but knew from the cache sizes of other CPUs that this was indeed unusual. invenio tc 09:15, 16 July 2021 (UTC)[reply]
All's well that ends well 🍏 Arzg (talk) 12:37, 16 July 2021 (UTC)[reply]
🥰 -- Henriok (talk)

Please add valid references to claim statements

I have seen references pointing to the M1 reveal (especially graphs provided by Apple) rather than an actual real-world benchmark. Please add a real-world source to justify your statements. -Sakura

GPU Design

Who designed the GPU in the M1? Many people assume it is Apple, but this is actually unverified. Are there primary sources from Apple (as opposed to AnandTech or other third parties) making statements on this? — Preceding unsigned comment added by 92.169.127.128 (talkcontribs) 17:49, 2 December 2021 (UTC)[reply]

The M1 is a derivative of the A14 and the GPU in the M1 uses the same GPU architecture and cores as the A14 (see MTLGPUFamilyApple7), which in turn is an obvious evolution of the GPU design that Apple introduced with the A11. Apple has consistently talked about the GPUs in their A-series as being "Apple-designed" since the GPU architecture introduced with the A11 (see Understanding GPU Family 4). But if you want to hear someone from Apple explicitly refer to the M1 GPU as "our Apple designed GPU" you can watch Apple's Tailor your Metal apps for Apple M1 video. —RP88 (talk) 22:50, 2 December 2021 (UTC)[reply]
Thanks! 92.169.127.128 (talk) 07:26, 3 December 2021 (UTC)[reply]
I reverted your change to the "ApplDesignedGPU" ref where you changed the ref from AnandTech to Apple's Understanding GPU Family 4. While GPU Family 4 / Apple A11 is when Apple switched to an Apple-designed architecture, that article doesn't mention the M1, so it is technically OR on our part to extrapolate that, and it is unnecessary when reliable secondary sources can make the inference for us. Furthermore, per WP:PSTS, reliable secondary sources, if available, are generally preferred to primary sources (although using two refs, the AnandTech article ref as a reliable secondary and a ref to the Tailor your Metal apps for Apple M1 video as a primary, would be fine). —RP88 (talk) 21:30, 3 December 2021 (UTC)[reply]

Instruction set is ARMv8.4, not ARMv8.5

Executing MRS [register], S3_3_C2_C4_0 returns "invalid instruction" on Apple M1 devices, including the Mac mini (M1) and MacBook Pro (M1). Also, WikiChip [2] lists it as v8.4. Stormj (talk) 21:59, 12 February 2022 (UTC)[reply]
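The probe described above, sketched in arm64 assembly (hardware-specific; it can only be tried on Apple Silicon itself, and the interpretation below is an assumption on my part, not from the comment):

```asm
// Read the system register encoded as op0=3, op1=3, CRn=2, CRm=4, op2=0,
// which is the encoding of RNDR, the random-number register added by the
// optional ARMv8.5 FEAT_RNG extension.
// Per the report above, this faults with an illegal instruction on the M1.
mrs x0, S3_3_C2_C4_0
```

Note that FEAT_RNG is optional even in ARMv8.5, so strictly speaking its absence alone does not pin down the architecture version; the WikiChip listing is the stronger evidence.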

Merge discussion in progress

A merge/split discussion involving this article is in progress at Talk:Apple M1 Pro and M1 Max#Split again?. BLAIXX 16:23, 12 March 2022 (UTC)[reply]

Another, Is the first sentence correct?

Hi, it says in the first paragraph: "The new chip also brought Apple's third change to the instruction set architecture used by Macintosh computers, 14 years after they were switched from PowerPC to Intel." This should be the fourth. It went to IBM fleetingly also. Or is this considered a bump in the road? Stripy42 (talk) 10:52, 13 March 2022 (UTC)[reply]

"To IBM"? The only instruction set from IBM that Apple used was PowerPC, so it's considered "not an additional change to the instruction set architecture"; both Motorola and IBM made PowerPC chips.
(And, yes, "Intel" is a chipmaker, not an instruction set; "to Intel" is better stated as "to x86".) Guy Harris (talk) 11:22, 13 March 2022 (UTC)[reply]
Would x86 and x86_64 count as separate architectures?  (Even though many processors can run both types of code, some macOS versions could run only one or the other, so some migration was involved.)  If so, then there have been five Mac architectures: 68k, PPC, x86, x86_64, and now ARM — making this the fourth change of architecture. Gidds (talk) 16:59, 5 April 2022 (UTC)[reply]
I consider "x86" to refer to the 16-bit, 32-bit, and 64-bit versions. There's no official name for the 16-bit (8086/8088, 80186/80188, 80286) version; IA-32 is a name for the 32-bit version, and x86-64 is a name for the 64-bit version.
64-bit x86 processors natively run 32-bit x86 operating systems, and an operating system can support both 32-bit and 64-bit applications. No migration was required for users when 64-bit x86 was added, and no binary-to-binary translation or simulation support was required; application developers did not have to make their applications 64-bit. Migration was required only when Apple decided to drop support for 32-bit applications (unless you install Parallels Workstation or VMware Fusion and run the older OS in a VM - yes, that works, I have 32-bit macOS and Linux VMs on my machine running Big Sur).
The same is true of the 32-bit and 64-bit versions of PowerPC - if Apple had continued with PowerPC, and eventually dropped support for 32-bit PowerPC, the same migration of 32-bit-only applications would have been required - so I don't count adding 64-bit x86 support to be a change of architecture just as I don't count adding 64-bit PowerPC support to be a change of architecture - no binary-to-binary translation support or simulation support was needed, as it was for the transitions from 68k to PowerPC, from PowerPC to x86, and from x86 to ARM. Guy Harris (talk) 18:11, 5 April 2022 (UTC)[reply]

Ok to start work on merger?

I made an edit to change the introduction from "The Apple M1 is" to "Apple M1 is a series". Is it ok to start working on merging these articles piece by piece? I don't want to step on anyone's toes or duplicate work if they are preparing large scale changes rather than incremental. —DIYeditor (talk) 11:36, 11 June 2022 (UTC)[reply]

I went ahead and did it. I believe I have transferred everything from the other article to this one. —DIYeditor (talk) 13:04, 11 June 2022 (UTC)[reply]

Why mention that specific security matter in the intro?

Seems like NPOV would refer more generally to security concerns in line with articles on other chip families. --\/\/slack (talk) 23:05, 18 January 2023 (UTC)[reply]