Talk:Tegra

WikiProject Computing / Hardware (Rated Start-class, Mid-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-Class on the project's quality scale and as Mid-importance on the project's importance scale.
This article is supported by the Computer hardware task force.

Microarchitectures

As can be seen here: {{Nvidia}} there are articles for the microarchitectures, e.g. Kepler (microarchitecture); it would be nice to say which Tegra uses which one. User:ScotXWt@lk 22:45, 24 March 2014 (UTC)

Heterogeneous System Architecture and Zero-copy

Which of the Tegra SoCs support which Heterogeneous System Architecture features, especially zero-copy?

User:ScotXWt@lk 00:11, 2 May 2014 (UTC)

APX 2500 Playback

According to the Nvidia Tegra FAQ, the APX 2500 can only decode up to 720p content, NOT 1080p! --Captain828 (talk) 11:04, 25 September 2008 (UTC)

Battery life... for a system which is independent of a battery?

Why is there a battery life rating on this when the battery will be decided upon by the MID/PDA/cell phone's manufacturer? This is like saying my car's left-front wheel will get 75 MPG. —Preceding unsigned comment added by 76.10.137.26 (talk) 05:38, 17 February 2009 (UTC)

Most batteries are 1500 mAh Li-ion, so I assume that's what it means. —Preceding unsigned comment added by 66.183.36.37 (talk) 08:32, 30 December 2010 (UTC)

Because 1500 mAh Li-ion batteries are standard, silly. —Preceding unsigned comment added by 66.183.36.37 (talk) 00:55, 21 January 2011 (UTC)

Battery life depends on more than just the processor. Showing how much power these processors draw would be useful... --50.22.149.44 (talk) 16:23, 25 March 2011 (UTC)

Zune HD

The Zune HD is a notable product that uses the Nvidia Tegra chip. Anybody want to add that? —Preceding unsigned comment added by Kevin chen2003 (talkcontribs) 21:22, 13 August 2009 (UTC)

Zune HD having 8 independent Tegra APX processors

The Zune HD's article says that it has 8 independent processors, but this isn't discussed much on this page, i.e., the number of processors involved in Tegra. Does anyone know something about this? Dplesher (talk) 15:45, 26 August 2009 (UTC)

I've read through all of the references that were given, as well as the page on the Tegra processor and its references, and I cannot find even one mention of 8 processors in the Zune, or 8 cores on the processor. I feel safe in removing this claim completely from the Zune HD page until someone finds a proper reference. Zippanova 05:14, 3 September 2009 (UTC)

The entire concept of the Tegra chip is the fact that it has 8 processors. For goodness' sake, type "tegra 8 processors" into Google: http://www.youtube.com/watch?v=qWqlKBp9qQ0 or maybe watch the video about the Tegra chip. This article should really contain some info about the function of the 8 processors. —Preceding unsigned comment added by Shump Head (talkcontribs) 18:22, 6 January 2010 (UTC)

Products that use Tegra

Are there any products (besides Zune HD) that are rumoured/or in production to use Nvidia Tegra? —Preceding unsigned comment added by 151.59.123.139 (talk) 17:34, 2 October 2009 (UTC)

It is rumored now that the next gen Nintendo handheld system will be using this processor http://www.brightsideofnews.com/news/2009/10/13/nvidia-tegra-wins-contract-for-next-gen-nintendo-ds.aspx Weeman com (talk) 20:42, 13 October 2009 (UTC)


This page claims the Model S uses the Tegra 3 T30 ARM processor. No citation or support. How is that known? Sjledet (talk) 14:59, 16 January 2013 (UTC)

Boxee Box uses Sodaville

The Boxee Box needs to be removed from the devices list http://www.engadget.com/2010/09/13/boxee-box-ditches-nvidias-tegra-2-for-intel-ce4100-pre-orders/ —Preceding unsigned comment added by Follower46 (talkcontribs) 16:59, 13 September 2010 (UTC)

Kal-El and later Tegras

I just changed the section regarding Kal-El and later Tegra devices. News sources such as the ref in the wiki article, and the Nvidia blog where it was originally released (presumably), both clearly distinguish in their wording between the GPU statements and the general performance statements. It is not the GPU that is faster, but the entire Tegra chip. Also, although the graphic appears to show 75 times the performance in the last Tegra generation, it has widely been discussed as offering about 100 times the performance. —Preceding unsigned comment added by 72.83.254.147 (talk) 16:07, 27 April 2011 (UTC)

  • "Entire Tegra chip" includes GPU - so if the GPU is 100 times faster, Nvidia can claim that the chip is 100 times faster, even though the CPU is maybe 2-3 times faster. Wikipedia however is not the same as marketing materials, we must know the difference.--Dmitry (talkcontibs ) 08:45, 4 May 2011 (UTC)
It would be pretty cool, but the graphics won't be anywhere near 100 times faster by Stark, nor 5 times faster in Kal-El. (PS: 100x faster is like a mid-range GPU today, running at ~200 W as opposed to ~0.25 W in a Tegra.) These numbers come from a presentation where the entire chip line's performance was projected. We actually have numbers from Nvidia for Kal-El's graphics (unlike for the CPU cores at this point).
The way Nvidia came up with these descriptions appears to be by adding the gains in both GPU and CPU. In Kal-El, we can see specifically that the number comes from 2x CPU and 3x graphics. The GFLOPS numbers Nvidia posted for the GPU rose from Tegra 2's 4.2/4.8 (there were multiple versions) to 15, and therefore the GPU can be said to be 3x faster. That leaves two times faster for the processor, which makes sense since it is going from a dual core to a quad core. But we cannot continue to assume that all the future numbers will continue to be the same 3/2 split, as we have no numbers. ...Long story short, I put CPU + GPU forms of those numbers back into the article. --— robbie page talk 18:28, 7 June 2011 (UTC)
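The additive 2x-CPU-plus-3x-GPU reading described above can be checked with quick arithmetic. The figures are the ones quoted in this thread (not official benchmarks), and the variable names are illustrative:

```python
# Figures quoted in the discussion above (not official benchmarks).
tegra2_gpu_gflops = 4.8   # Tegra 2 GPU (higher of the two versions mentioned)
kalel_gpu_gflops = 15.0   # Kal-El GPU figure Nvidia posted, per the thread

gpu_speedup = kalel_gpu_gflops / tegra2_gpu_gflops   # ~3x GPU gain
claimed_total = 5                                    # Nvidia's "5x Tegra 2" claim
cpu_speedup = claimed_total - round(gpu_speedup)     # additive split leaves ~2x CPU

print(round(gpu_speedup, 3), cpu_speedup)            # 3.125 2
```

This only works if the marketing number really is the sum of the two gains, which is the assumption the comment above makes.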

I noticed the Tegra 2 list isn't complete; for example, the Toshiba Thrive uses an Antares Tegra 2 processor, yet it isn't on the list. Also, there are others not mentioned, such as Scorpio and Kaen. 216.201.66.239 (talk) 20:13, 13 December 2012 (UTC)

Switch to A15

There is a source (in the article) from Nvidia that it will switch to 28nm after Tegra 3. This is likely the move to the Cortex A15. The simplest reason is that it is intuitively unlikely that Nvidia would want to wait until late 2013/early 2014, a full year after everyone else comes to market with their A15 designs. Nvidia has usually come later to market on the ARM designs, but that much later would be a change; their competition (i.e., TI) would be releasing their second or even third generation A15s at the same time that Nvidia would be releasing their first generation A15s. The other line of reasoning is as I added in the article:

Note that 28nm strongly implies the Cortex A15, as otherwise Nvidia would have to completely design their own chip based on the Cortex A9 supporting the ARMv7 instruction set, much as Qualcomm does.

    --- How does a shift to 28nm imply Cortex A15? How does it relate to NVIDIA designing their own chip (should be "CPU" instead) if they move to 28nm?
    I think the answer to all your queries is the project "grey", which "still" has a Cortex A9 and is on the 28nm technology node.

This is definitely something newsworthy, as it would represent a fundamental change in the direction of the company, like their "Project Denver" plans, and yet it is completely absent from their presentations. Further, the gain from switching from an A9 design to an A15 design is said to be about 40% clock for clock, and the clock rate could also increase from Tegra 3's 1.5 GHz to a theoretical 2.5 GHz (about 1.66x). That's about a 2x improvement over Tegra 3. Assuming the graphics cores remain the same, it's 140% * 166% =~ 233%, but they probably won't max out the clock rate. At a 2 GHz clock, that would leave room for a mild improvement in the GPU. Although I guess it is unlikely that Nvidia would keep more or less the same GPU through a die shrink, reducing the area of the chip and the power requirements instead of making use of the extra space. Anyway, it's a lot of speculation, but it's hard to imagine them going to 28nm and not switching to the A15 design, as that definitely would be abandoning the ARM design and just sticking to the instruction set. --— robbie page talk 13:24, 16 June 2011 (UTC)
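A quick sanity check of the arithmetic above; all inputs are the speculative figures from this comment, not Nvidia specifications:

```python
ipc_gain = 1.40            # claimed A15-vs-A9 gain, clock for clock (speculative)
clock_ratio = 2.5 / 1.5    # theoretical 2.5 GHz vs Tegra 3's 1.5 GHz

total = ipc_gain * clock_ratio
print(f"~{total:.2f}x over Tegra 3")  # ~2.33x
```

Note the clock change is a 1.66x ratio (a ~66% increase), which is what the "166%" factor in the comment refers to.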

File:Nvidia-Tegra-T2-embedded-Colibri-computer-module-by-Toradex.jpg Nominated for Deletion

An image used in this article, File:Nvidia-Tegra-T2-embedded-Colibri-computer-module-by-Toradex.jpg, has been nominated for deletion at Wikimedia Commons in the following category: Deletion requests December 2011
What should I do?

Don't panic; a discussion will now take place over on Commons about whether to remove the file. This gives you an opportunity to contest the deletion, although please review Commons guidelines before doing so.

  • If the image is non-free then you may need to upload it to Wikipedia (Commons does not allow fair use)
  • If the image isn't freely licensed and there is no fair use rationale then it cannot be uploaded or used.

This notification is provided by a Bot --CommonsNotificationBot (talk) 07:55, 13 December 2011 (UTC)


Tegra 4

Seems to be a long advert. While most of the content could be classed as useful, it seems to be written straight from the Nvidia marketing documentation. I suggest removing phrases like "Tegra 4 offers exceptional graphics processing, with lightning-fast web browsing, stunning visuals and new camera capabilities through computational photography." which seem to be based on opinion. — Preceding unsigned comment added by 213.146.139.62 (talk) 15:23, 19 February 2013 (UTC)

Android

Several Android devices, most notably the first-generation Nexus 7, happen to be Tegra-powered. Since Android is based on a full Linux kernel, and Linux hardware drivers must run in kernel space regardless of what software (whether it's Android, Mir, X11, Wayland, or what not) is running in the userland, why is the Linux support section even there? It reads like it might be years out of date, so it should be updated. Thanks. 2602:306:BCA6:8300:C5A3:F15D:8214:E9BE (talk) 02:55, 4 November 2013 (UTC)

You are probably right; there are even Replicant versions for Tegra-based phones and tablets. (As far as I know, though, Android puts some things that would normally be in kernel space into userland, presumably to avoid linking against the GPL-licenced kernel and releasing those things under a GPL-compatible licence. I don't know whether that applies here.) Perhaps you can investigate and update the section with the sources you find. --SpecMade (talk) 19:38, 4 November 2013 (UTC)
I take back the statement about Replicant. (I confused the older Galaxy Tab, which had a Tegra 2, with the Galaxy Tab 2, which is supported by Replicant.) --SpecMade (talk) 22:06, 4 November 2013 (UTC)

Tegra K1

Can the Tegra K1 please have Project Denver removed until it is posted on the official Tegra page? Even if it's not removed, it should still have a separate line for the Project Denver variant alongside the A15 one, as they both have very different properties. Reference 117.120.16.134 (talk) 05:15, 31 January 2014 (UTC)

PureVideo (Video acceleration SIP block)

Do Tegra dies contain Nvidia PureVideo SIP blocks? User:ScotXWt@lk 18:46, 24 June 2014 (UTC)

Tegra number codes

This page on the web provides the following statement of Qingqing Deng (Senior System Software Engineer at the Mobile Browser Team, Nvidia, San Francisco Bay) for the time span 2012 to now (2016):

  • The products include Nvidia Tegra platforms (T114, T124, T132, T210, T186) for phones, tablets, Shields, and Chromebooks.

At least the "T186" is currently not at all listed here on Wikipedia. If assumed that the order is the same then T186 would fit in the blank field that is currently set aside to the preliminary section of Tegra P1 - but really entering it there only based on this data would still be sort of guesswork, so we should better abstain from entering it unconditionally for now. (Would setting a question mark after such an entry be sufficient?) --Alexander.stohr (talk) 15:43, 25 October 2016 (UTC)

The build system of the Android kernel for the Shield hosted at GitHub got a change mentioning T186 as well in the commit message ('video: tegra: host: mm: Add hooks for T186'), with related name sequences in a symbol ('TEGRA_18x') and a path ('t18x'):
ifeq ($(CONFIG_ARCH_TEGRA_18x_SOC),y)
ccflags-y += -I../kernel-t18x/drivers/video/tegra/host/
endif
--Alexander.stohr (talk) 15:57, 25 October 2016 (UTC)
Stephen Warren from NVIDIA does mention a very similar item ('Tegra186') in the source code change messages at ozlab.org:
From: Stephen Warren <swarren@nvidia.com>

Tegra186's MMC controller needs to be explicitly identified. Add another
compatible value for it.

Tegra186 will use an entirely different clock/reset control mechanism to
existing chips, and will use standard clock/reset APIs rather than the
existing Tegra-specific custom APIs. The driver support for that isn't
ready yet, so simply disable all clock/reset usage if compiling for
Tegra186. This must happen at compile time rather than run-time since the
custom APIs won't even be compiled in on Tegra186. In the long term, the
plan would be to convert the existing custom APIs to standard APIs and get
rid of the ifdefs completely.

The system's main eMMC will work without any clock/reset support, since
the firmware will have already initialized the controller in order to
load U-Boot. Hence the driver is useful even in this apparently crippled
state.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
---
This patch is needed at compile-time for the upcoming Tegra186 core patches
to compile. As such, it's probably best if it gets applied via the Tegra tree.
--Alexander.stohr (talk) 16:12, 25 October 2016 (UTC)

Nintendo Switch

There has been a bit of back and forth about the inclusion of the Nintendo Switch under Tegra X1. Some have asserted that neither Nintendo nor Nvidia has confirmed that the Tegra X1 is the SoC in the Switch. However, manufacturer confirmation may never come (e.g. PS4 Pro and Wii). I would also point out that this is not required by Wikipedia policy, which states 'Predictions, speculation, forecasts and theories stated by reliable, expert sources or recognized entities in a field may be included'. Dbsseven (talk) 21:21, 1 February 2017 (UTC)

That same paragraph also says "In forward-looking articles about unreleased products, such as films and games, take special care to avoid advertising and unverified claims." Tegra X1 counts as an unverified claim. All we know is that it's a Tegra chip. Also, it's best if we keep discussion on this centralized, so where would you like to continue it? Here or at Talk:Nintendo Switch, where you've started an RfC about this? Gestrid (talk) 04:44, 2 February 2017 (UTC)
Gestrid, I agree this is redundant. But it is also redundant with a discussion and RfC on this very topic. I have tried including the compromise language based on that RfC, but it is being reverted. Dbsseven (talk) 15:56, 2 February 2017 (UTC)

Tegra X1/Switch list

There have been multiple edits of the Tegra X1 list removing the separate listing for the NX chip in the Nintendo Switch. I would say the chip should be listed separately.

  • There is not a cited source stating the NX and the T210 are the same product, so this deletion is original research.
  • Listing both chips is in keeping with convention for microprocessor lists where differently clocked parts are listed individually (see Tegra 2 or Tegra 3 lists or AMD or Intel microprocessor lists as examples).

Thoughts? Dbsseven (talk) 18:17, 22 March 2017 (UTC)

But there is a source listed that mentions that the NX contains a T210. It's right there. TechInsights tore down the chip, imaged its silicon, and saw that it was T210. TechInsights, which merged with the former Chipworks, are well known in the field for their silicon level analyses of products. What more reliable source do you want? The previous talk page discussion predates that reference and analysis (and product release) and so is obsolete. There's a difference between selling a part specified to reach a certain clock - which is what manufacturers do with the same die, and they give them a different model number, and therefore they warrant different listings - and running a part at a given clock. We have no reference for what clocks the NX SoC is specified to run at. Nvidia only sells one Tegra X1 part, the T210. All we know is that Nintendo runs it at particular frequencies (probably for power and thermal reasons). Listing the NX SoC separately assumes it is being sold to Nintendo as a nonstandard part number, and that the leaked clock frequencies are a chip spec and not just a design choice. That is original research (and we have no evidence for it, so it's terrible research). Available, up to date, public evidence and research says it is a T210. Heck, the article as it currently stands isn't even consistent, because it lists NX-SoC as a separate chip, then goes on to list the Switch on the list of devices under T210. 2001:2C0:CC12:901:19D9:115C:A48:9BC0 (talk) 12:42, 23 March 2017 (UTC)
Considering that the part numbers printed on the NX chip are different from those on other X1 products like the Pixel C, the argument that they are the same product seems unlikely. The assertion by TechInsights that they are the same die is fine; that does not make them the same product. See here for an even more up-to-date discussion of possible modifications despite identical dies for the Switch. Dbsseven (talk) 16:09, 23 March 2017 (UTC)
ps: And you're right about the article being confusing about the Switch's chip. That is why I started this discussion on the talk page, to sort this out. Dbsseven (talk) 16:12, 23 March 2017 (UTC)
Those two photos are comparing X1 chips manufactured far apart in time and likely on different lines, perhaps even different mask revisions. Try this comparison (the early leaked Switch teardown vs. a contemporary Nvidia Shield). Note how the top, laser-etched portion of the markings appears to be identical except for "1632A2" vs. "1629A2" (which is almost certainly a date code - week 29 vs. week 32 of 2016). The lower, silkscreened text would've been added after the chips were manufactured to differentiate their target products (i.e. likely after they blew the configuration fuses). Mind you, at this point we're both engaging in original research. Ultimately, it is true that the same die can (and almost always does, in e.g. AMD and Intel parts) yield different products, and the difference really comes down to how the manufacturer chooses to market the chip (there is no technical boundary that defines where something becomes a different product). But we lack and likely never will have any official documentation on how Nvidia is internally marketing this chip to Nintendo (as a T210 or as something else). We do not have evidence either way; given that this is an info table and the chip has to go either in the same row or a separate row, a decision has to be made. Given that recent Tegra devices have been single-SKU (since Tegra 4 at least), with no binning/differentiation by clock speed, I would argue that the more logical decision is to simply list the Switch as using the T210. If it looks like a T210, swims like a T210, and quacks like a T210, then it probably is a T210. 2001:2C0:CC12:901:19D9:115C:A48:9BC0 (talk) 17:17, 24 March 2017 (UTC)
PS. Conversely, note how the markings between the Shield TV and the Pixel C are completely different in the top half (manufacturing info) and partially different in the bottom half (part number/application). It could be similarly argued that those two products use different SKUs, even though we know they're both T210s. But T210 isn't printed anywhere on the chip, so ultimately we can't conclude either answer from the chip numbering. 2001:2C0:CC12:901:19D9:115C:A48:9BC0 (talk) 17:24, 24 March 2017 (UTC)
I think we are making progress here. And you're right, we're both inferring a lot. I am hesitant to call them the same product when a full interrogation of the chip has not been carried out to see if all the X1 features are there (not just similar die shots). And I had a pretty lengthy conversation previously about whether the pre-release Switch contained final hardware, or was just something for developers to work with. But even if the NX chip is exactly the same in every feature as the T210, given that it runs at different clocks, I bet against it being the same product. We do not know if Nvidia sold the chip as a unique product with defined clocks, or if Nintendo just took a generic X1 chip as offered and clocked it how they like. (I would lean toward the first. Nvidia knows who they're selling to, is working with Nintendo, and likely binned products to fit the agreed-upon specs.) In any case, by convention, separately clocked but otherwise identical chips are listed independently, so I think it is worth keeping it the same way here. This provides an easy means of comparison. (i.e. Is the Switch's CPU faster or slower than a T210?) Dbsseven (talk) 18:28, 24 March 2017 (UTC)
Different devices with the same chip will have different clock speeds; clocking embedded devices however you want within the specification maximum is standard practice (for various reasons, mainly power and heat). You can over/underclock a Raspberry Pi as you see fit. Even on desktop computers, the OS will clock CPUs up and down, and x86 computers with particular power requirements will limit the speed of a particular chip; it's trivial to configure an OS like Linux to underclock your CPU. The Pixel C reportedly runs its GPU at 850 MHz. The Shield TV runs the CPU at 2 GHz, which is actually higher than what's listed here. By that standard, neither of those is a T210 either, but something else, and deserves a separate row on the chart. If you want every implementation (and potential binning, given that the chip manufacturer always works closely with device manufacturers in the mobile industry) with different configured specs to be listed separately, then you need a row for every product using the Tegra X1, and you need to invent some kind of naming scheme, because the only official name for a Tegra X1 so far is T210. 2001:2C0:CC12:901:19D9:115C:A48:9BC0 (talk) 07:07, 30 March 2017 (UTC)

According to the teardown guide posted in this topic, the processor is an ODNX02-A2. It likely means that it is an Nvidia Tegra A2. 84.212.111.156 (talk) 16:01, 6 September 2017 (UTC)
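As an aside, the point above about Linux underclocking being trivial can be sketched with the standard cpufreq sysfs interface. This is a hedged example: it assumes a kernel with cpufreq support, the available-frequencies file only exists with some cpufreq drivers, the valid frequency values are device-specific, and the write requires root.

```shell
# List the frequencies the governor may choose from (driver-dependent):
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies

# Cap CPU0 at 1.0 GHz (the value is in kHz; requires root):
echo 1000000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
```

On multi-core systems the same cap would normally be applied per core (cpu1, cpu2, ...), or via a tool such as cpupower.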

Tegra X1 big.LITTLE

The article text states, "Contrary to initial belief, Nvidia does not use the eight cores in ARM big.LITTLE configuration," but then goes on to describe a system which sounds very similar to the big.LITTLE "In-kernel switcher" mode. I guess the distinction is the switching happens in hardware rather than OS kernel, but in either case the core migration is transparent to client software, no? Would it be more accurate to call this a modified implementation of big.LITTLE? Walkersam (talk) 22:44, 6 September 2017 (UTC)

User TechSource constantly changes Tegra X2 to the false name P1

Even though the sourced official Nvidia PDF clearly refers to it as X2, this user constantly changes it to P1. — Preceding unsigned comment added by Eddmanx (talkcontribs) 14:01, 8 November 2017 (UTC)