
Talk:GeForce 8 series

From Wikipedia, the free encyclopedia

First Article

This is the first time I have created a new article on Wikipedia. Constructive feedback will definitely be needed. I read many of the rules and regulations about posting, and have made many edits to existing articles. I just need to know what else I need to make this "fit" with the GeForce series. Please submit feedback! Bourgeoisdude 18:32, 21 February 2006 (UTC)[reply]

Perhaps some readers will feel that this article is too speculative or unneeded. Personally I have no problem with the article, since the next graphics series from Nvidia will be notable and will probably be named GeForce 8. Nonetheless, we should add only official statements to the article, avoiding speculation. Shawnc 05:42, 25 February 2006 (UTC)[reply]

Thanks Shawnc. Bourgeoisdude 23:25, 27 February 2006 (UTC)[reply]
It seems to me like this article contains a lot of original research, which is against Wikipedia policy. Please cite sources for these rumors. 66.245.44.26 21:42, 17 July 2006 (UTC)[reply]

Dunno about you guys, but I think that DirectX 10 is used for generating Unreal 2006 graphics (shaders, light/glow effects, etc.). Also, I would imagine that clock speeds for the next generation of GeForce would reach 1 GHz. 165.21.154.111

Release Date

Are you sure the expected release date is June/July this year? Narwaffle 04:32, 21 April 2006 (UTC)Narwaffle[reply]

No. That's been the rumor though...I think you're right, it is likely that rumor is false. Will see if any "more accurate" rumors are present... Bourgeoisdude 17:52, 21 April 2006 (UTC)[reply]
I cannot find any significant rumors about the release date...unless someone or something tells me otherwise, I will edit this without the release date info as I am unable to verify its authenticity. Bourgeoisdude 17:18, 27 April 2006 (UTC)[reply]
Found rumors that seem to point that NVIDIA's newest GPU family will be released in the late Fall, so changed the article accordingly. Bourgeoisdude 23:05, 2 May 2006 (UTC)[reply]

We should make it a rule: a link to the Inquirer doesn't count as a source.

Why the recent edit of release date by anon. ip? I have made a thorough search for information, but the best info I could find was this Inquirer article. (Yes, I know. It's the Inquirer, but still, there isn't exactly an abundance of reliable sources here...) No sources I could find pointed towards anything but a 2006 launch, though. --Fat Hobbit 18:23, 3 August 2006 (UTC)[reply]

Nov 8th is the launch date! Source: http://www.geforcelan.com Looks like they are launching the next "platform" at a LAN Party.

Vandalism Watch

Why is it that this page is such a magnet for vandalism? Was it something I said? Bourgeoisdude 14:42, 18 May 2006 (UTC)[reply]

I would guess it's because of the constant ATI/Nvidia fanboy warring... --Fat Hobbit 18:23, 3 August 2006 (UTC)[reply]

Uhh...

"In order to combat power supply concerns, Nvidia has declared that G80 will be the first graphics card in the world to run entirely off of the souls of dead babies. This will make running the G80 much cheaper for the average end user."

Blasphemy. :D

Vandalism?

Is it wrong for me to be LMAO from that statement? I must give him credit at least for the humor, nonetheless this is not the place... Bourgeoisdude 19:16, 3 October 2006 (UTC)[reply]

Dual Chip vs Dual Core

I changed it to say 'dual chip' since 'dual core' video cards (like the 7950 GX2) actually use two separate chips, each with one core on them. The concept of a 'dual core' video card is a misnomer; i.e. every card since the Voodoo2 has been 'dual core' in that they have multiple pixel pipelines.

Manufacturing process?

The article says that they'll be switching to 80nm some time in the future, but makes no mention of what it is now... --203.206.183.160 08:04, 26 October 2006 (UTC)[reply]

90nm - Pkaulf 11:51, 26 October 2006 (UTC)[reply]
Who posted that they'll use a 80 nm process? That may be true, but I'm suspicious, since the standard processes of today and tomorrow are 90 nm, 65 nm, and 45 nm. This is the first I've heard of an 80 nm process. Patrick Gill 01:13, 12 November 2006 (UTC)[reply]
No need to be suspicious, they will make an 80nm chip. AMD wanted to release an ATI chip using 80nm before Nvidia, but AMD are having some trouble getting it out, so Nvidia will release their 80nm 8800GTX before AMD/ATI. Here is an article from The Inquirer: http://www.theinq.com/default.aspx?article=36673

Article Expansion/Restructure

With the release of the cards coming tomorrow, the article should head in the direction of the other graphics cards articles. For example: http://en.wikipedia.org/wiki/GeForce_6_Series, http://en.wikipedia.org/wiki/GeForce_7_Series, http://en.wikipedia.org/wiki/Radeon_R520. Pictures and individual benchmarks, while exciting for us hardware enthusiasts, aren't encyclopedic. The rumors section will need to go. The official nVidia logos for the GeForce 8ks should be added. Great care should be taken to cite the card's hardware specs instead of just editing in information and numbers. These are just my thoughts, I look forward to working with everyone on the article =) Tyro 05:02, 8 November 2006 (UTC)[reply]

Agreed, let's cut the 'rumors' for now since the real thing is here. Bourgeoisdude 21:32, 8 November 2006 (UTC)[reply]

Benchmark Testing

Can you be more specific to what cards you are using and whether they are stock or overclocked and at what speeds?

RTM is here

Alright guys, I got us started on moving this from a future product to a current one, now we need to change everything else. Any help in transitioning would be great, especially as I am still trying to figure out all the ways that wiki works. The card looks really awesome by the way: http://www.nvidia.com/page/geforce_8800.html Bourgeoisdude 21:30, 8 November 2006 (UTC)[reply]

Fixed - Almost

I've done most everything to make it updated, but it's in a weird format.

The "Addressing the Rumors" section was just the old one with some minor word changes, and some question marks for unknown information (better than being outdated and misleading).

I got rid of some old references, corrected the details table + added information to it, and changed the "Production Information" article. Viperman5686 --November 8, 7:16PM Eastern.

I did some pretty heavy chopping up and reorganizing, I think this layout will work though. A section to say what all the GF8 cards have in common, and then a section for each series (8800, 8600 when it comes out, etc). I tried to keep the language from being too technical as well, so someone reading the article won't need a huge GPU vocab dictionary to get by. Tyro 09:46, 9 November 2006 (UTC)[reply]

Pixel Pipelines?

Correct me if I'm wrong, but I'm fairly certain this card has pixel pipelines. Someone took my section of the technical summary out mentioning pixel pipelines and changed the wiki to say, "Historically, graphics cards had fixed numbers of non-unified shaders or pipelines. The graphics card's rendering power could become bottlenecked waiting on one high demand shader type."

Look at the fill rate, a simple math equation can tell you the 8800GTX has 64 pipelines and the 8800GTS has 48. (Fillrate*1000)/(Core MHz), 36800/575=64;24000/500=48. Does the above quote mean they have 64 and 48 including the vertex pipelines, effectively reducing the real world fill rate?

Viperman5686 7:03AM Eastern November 9th, 2006.
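The fill-rate arithmetic above can be sanity-checked in a few lines of Python (the fill rates and clocks are the figures quoted in this thread, not official spec-sheet values):

```python
# Back-of-the-envelope estimate discussed above:
# pipelines ~= texel fill rate (Mtexels/s) / core clock (MHz)
def pipelines(fillrate_mtexels_s, core_clock_mhz):
    return fillrate_mtexels_s / core_clock_mhz

print(pipelines(36800, 575))  # 8800 GTX -> 64.0
print(pipelines(24000, 500))  # 8800 GTS -> 48.0
```

As the replies note, this ratio stops mapping onto real hardware once the architecture is unified, so treat it as arithmetic, not a hardware description.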

I may have removed this while I was trying to reorganize the article, sorry. I used [1] and several pages from [2] as my basis to write about the new shaders. As I understand it, the old style pipelines have been done away with in favor of the new "stream processors" Tyro 20:40, 9 November 2006 (UTC)[reply]
You definitely can't refer to pixel pipelines anymore. That was becoming inaccurate with cards from the last couple years too. G80 doesn't have anything like the pixel pipelines of older cards. --Swaaye 21:09, 9 November 2006 (UTC)[reply]
So basically it has "Stream Pipelines" that do vertex, pixel, physics, and geometry? Or can the term pipeline not be used at all? --Viperman5686 Thursday, 2006-11-09 T 21:37 UTC
Nevermind, I get it now. Pretty weird that nVidia didn't even tell us if they would use a unified architecture or not, it was hard to understand at first. It looks like we'll have to come up with a new way to easily define raw power of same brand cards, because before it was just MHz*Pipelines --Viperman5686 Thursday, 2006-11-09 21:48 UTC
I'm not entirely sure myself yet. Trying to digest all of this stuff takes some time. But yea, the stream processors do most of the work and then the ROP area outputs pixels. Or something close to that lol. Read the insanely in-depth Beyond3D article if you'd like to take a gander at understanding it (yikes).
Some programs (like Everest Engineer Ed.) tell me that my 8800GTS has 20 pixel pipelines with 1 TMU per pipeline. Not sure if that is correct. The GPU is running at 575 MHz, and fillrate is reported at 11500 Mpixels/s.
Pipelines haven't been a good measure for a LONG time. Consider that the X800XT PE massively outgunned the 6800 Ultra in fillrate yet it wasn't much ahead of it in most cases. Pixel pipelines date back to the Voodoo1 (a 1x1 design). Things have changed so much since then that they became almost unrelated to overall performance. Another example: the 16-pipe R580 defeats the 24-pipe G71 because it has more pixel shader resources, and pixel shader numbers were decoupled from the pixel pipelines in even the previous generation. Then there's also memory efficiency, ROP numbers, etc.--Swaaye 21:52, 9 November 2006 (UTC)[reply]
Yeah, when you compare ATi cards vs. nVidia cards, then that's exactly what you get. What I meant was: when you want to look at the 7900 GT vs the 7600 GT, the number of pipelines and clock speed can give a good idea of the performance difference. --Viperman5686 Thursday, 2006-11-09 21:48 UTC
Yeah that's true I suppose. In each generation at least. With GeForce 8 it'll probably come down to how many sets of stream processors there are. 8 in 8800, etc.--Swaaye 01:34, 10 November 2006 (UTC)[reply]
Surely that won't be the case either. Obviously it will make a big difference but it would seem to me stream processor clock speed (which will affect how much each processor can do/second) is probably going to have a fair effect as will memory bandwidth (there will surely still be instances when the card is memory limited) and also core clock speed (I'm assuming this could be a limiting factor depending on the scene too) Nil Einne 07:31, 10 November 2006 (UTC)[reply]
The number of stream processor groups will definitely drop with lower end models. Why? Because it will create smaller GPUs. A mid-range or low-end card can't have a 670-ish million transistor chip. Clock speed will change too, but I don't know how much. Previous generations had mid-range boards running similar clock speeds as high-end models, but without the same amount of actual computational resources they were obviously slower. --Swaaye 18:02, 10 November 2006 (UTC)[reply]


Table

They both say Nvidia GeForce 8800 GTX; the one with the lower specs should say Nvidia GeForce 8800 GTS.

High end & mid range

Does anyone really think the 8800 GTS can be called mid-range? I would argue the 8800 GTS is the high end and the 8800 GTX is the ultra high end. The mid-range and entry level are unannounced, although as this article speculates, they'll probably be the 8300 and 8600. Nil Einne 04:49, 10 November 2006 (UTC)[reply]

I've been bold and changed it back to how it was. An anon is the one who changed it here. This is also more consistent with this article and our Geforce 7 and I guess 6 articles BTW... Nil Einne 04:54, 10 November 2006 (UTC)[reply]

8 groups of 16 stream processors

I don't have a great understanding of processor design, so forgive me if this is stupid, but the article is not 100% clear on whether there is any restriction within a group. What I mean is: are all 128 stream processors completely independent from each other? From the article and my general knowledge I'm guessing they are all more or less independent. Obviously you could only disable a whole group (for GTS etc), but other than that I'm assuming they can each act independently. However, perhaps there are limitations, e.g. on bandwidth etc. Nil Einne 07:24, 10 November 2006 (UTC)[reply]

Check out the reference link. Or just check out either Tech Report's review or Beyond3D's architectural analysis. Both have more data than needs to be in this article. --Swaaye 18:04, 10 November 2006 (UTC)[reply]
I see it like 8 processors with 16 cores. They each have their own cache. The 8800 GTS has 6 processors with 16 cores. They can all work together. --Viperman5686
Does the 8800 GTS really have 6 processors with 16 cores (or whatever you want to call it), or 8 processors with 16 cores, 2 of which have been disabled and may or may not actually be working fine? I would assume it's the latter. Nil Einne 12:44, 11 November 2006 (UTC)[reply]
It is the same chip, cut from the same wafers as the GTX variant, except that indeed some part of it may be malfunctioning or not working at the target rate; the design allows those parts to be shut down, making a fully functional, slightly less powerful chip. We may see more variants in the future, as the possibilities are almost endless (number of memory partitions, number of ALUs, clocks, etc.).

The description saying that the card is composed of "8 groups of 16 stream processors" does not match with the programming information presented in the CUDA documentation: http://developer.download.nvidia.com/compute/cuda/0_8/NVIDIA_CUDA_Programming_Guide_0.8.pdf See page 49 for a description of the 8800 hardware. Using NVIDIA's terminology, there are 16 "multiprocessors" each composed of 8 "processors". It should be noted that the "processors" are not independent, and are more like floating point units rather than independent execution cores. All 8 of them in a group have to execute the same instruction at once. At the multiprocessor level, they can act more independently. So the current description should be changed to "16 groups of 8 stream processors" for the GTX and "12 groups of 8 stream processors" for the GTS. Stan Seibert 22:46, 17 March 2007 (UTC)[reply]
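To make the two naming schemes concrete, here is a small Python sketch of the counts described in that CUDA documentation (the per-multiprocessor count comes from the comment above; the lockstep SIMD behavior is only noted in a comment, not simulated):

```python
# CUDA terminology per the comment above: a "multiprocessor" contains 8
# "processors", which behave more like SIMD lanes (all 8 issue the same
# instruction at once) than independent cores.
SPS_PER_MULTIPROCESSOR = 8

def total_stream_processors(multiprocessors):
    return multiprocessors * SPS_PER_MULTIPROCESSOR

print(total_stream_processors(16))  # 8800 GTX -> 128
print(total_stream_processors(12))  # 8800 GTS -> 96
```

Either grouping (8x16 or 16x8) gives the same totals, which is why the article's wording alone can't settle the question; the CUDA guide's terminology does.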

AGP

Will this card support AGP?

Not natively, but in theory at least, it is perhaps conceivable that one of those PCI-e to AGP bridge chips could be used to allow it to work in an AGP slot, though I'm going to say at this point it is highly unlikely. The new-gen video cards really need newer CPU architectures to perform optimally anyway, so providing a new-gen graphics card to an old-gen PC may be an unwise decision. Heck, I'd be willing to bet that an old PCI version of a GeForce 8 series card (8300LE??) may be released before an AGP version would. So far, no official announcements have been made by Nvidia or any of its manufacturers to support AGP or PCI versions, but we can't yet rule it out. Bourgeoisdude 22:05, 29 November 2006 (UTC)[reply]

Power consumption

GeForce 8 has very high power consumption!

I think you should more appropriately say GeForce 8800 has high power consumption. I have no doubt that the mid/low-end models will be much more frugal.--Swaaye 20:27, 18 November 2006 (UTC)[reply]
It says the 8600GTS draws 71W, but tests on the net (Xbitlabs, which is a believable source) show it to draw a modest 47W. That sounds more plausible, as there are passively cooled 8600GTS cards where the heatsink isn't oversized. 85.19.140.9 01:39, 20 August 2007 (UTC)[reply]

SLI

It's been about 6 years since I've built a gaming desktop PC, thinking about doing it again soon, and I know absolutely nothing about SLI except that having two 8800GTX's sounds good. The Wikipedia SLI article mentions that on high end cards, the advantages of SLI can be diminished, does anyone know how this applies to the 8800GTX? Does anyone know what its SLI performance is like? --GothMoggie 15:21, 23 November 2006 (UTC)[reply]

This page is for helping to improve the article. Your question would be better answered on a tech forum such as arstechnica or tomshardware or anandtech or something. Please see WP:NOT. Thanks. --Yamla 15:37, 23 November 2006 (UTC)[reply]


SLI mode on two 8800GTX's uses 75% of the total processing capability of the two cards. It's like having the power of one card and half the power of the other card. However, one 8800GTX/GTS outperforms two 7950GT's in SLI. —The preceding unsigned comment was added by 167.206.216.189 (talk) 20:16, 12 January 2007 (UTC).[reply]

I'd like to see the specs on two 8600 Ultras SLI'd together... the total cost would be LESS than a single 8800GTS! I can't wait for March... --Dante Alighieri | Talk 22:43, 25 January 2007 (UTC)[reply]

We should talk about architecture improvements

I think that someone (I will, if you agree with me on this) should write about the new features in a way that shows the improvements in the GeForce 8, which is completely different from the GeForce 7 architecture. There are many, like the filters, the geometry shader, and CUDA processing. Why bother making a page that only shows the card clocks? You have to show people why this card is good, especially because the biggest change was in the architecture. On Nvidia's page there is a PDF with 55 pages that shows all the improvements; we only need to write them up here. Thanks!!--Darktorres 18:45, 1 December 2006 (UTC)[reply]

I agree with you--or at least I agree with what I believe you to be saying. I think it should follow the format/layout of previous wiki articles, but we should have extra content because it is a major architecture change. Let's model the page to be similar in layout to the GeForce 7 article, and have content extras like those in the GeForce 256 article (since it was the last "major architectural change"). Last edited by Bourgeoisdude 20:58, 1 December 2006 (UTC)[reply]
OK, I saw the GeForce 256 article and I will use the same layout. I intend to make a 'changes to the architecture' section and show what has changed. Something that could change is the fact that other articles (GeForce 7, for example) don't talk about architecture, only numbers and numbers. Numbers aren't the only important thing; it's like thinking that only the gigahertz of your processor counts. My reference covers the difference between the 7 and 8 architectures, and I'll try to make it as simple as possible! --Darktorres 00:43, 3 December 2006 (UTC)[reply]
Problem is that I doubt any of us have a grasp on the hardware. I know I don't. Beyond3D has the best architectural look bar none. Those guys are graphics programmers and have chatted with the engineers. Trying to duplicate that seems a bit like reinventing the wheel, especially when we don't really know for sure that we're right. I definitely think what I threw together for the 8800 here is enough. I have considered making that info more generic too. We sure don't need that kind of dense coverage on every eventual model, and all of the models will undoubtedly have the same technology at work. --Swaaye 23:20, 5 December 2006 (UTC)[reply]
I agree completely. The 7 Series wiki didn't define pixel shaders, vertex shaders, ROPs, etc., and neither should this. Viperman5686 01:30, 9 December 2006 (UTC)[reply]
Wikipedia:Make_technical_articles_accessible. This article is already too technical, the average reader would have a very hard time understanding it. Tyro 10:47, 9 December 2006 (UTC)[reply]
Hmm, OK. But some info like the CUDA tech is missing; it's just like PureVideo, something new that could end up with everyone using it (or not). I will add these. I didn't want to add shaders or ROPs, but the fact that it can do HDR+AA should be mentioned. Darktorres 21:34, 12 December 2006 (UTC)[reply]

I added some of the modifications I had made in Word, but they need some formatting; I'm still learning how this wiki works... Darktorres 21:37, 12 December 2006 (UTC)[reply]

Bottlenecks

Is the 8800's performance less dependent on CPU and memory speed than older cards?

Due to the 8800's massively powerful GPU, it is quite easy for the CPU to hold it up. Cooldude7273 03:45, 20 January 2007 (UTC)[reply]

Should we make subheadings for 8800GTS and 8800GTX, like previous geforce wiki formats?

Well, the question says it all. So should they be in their own separate sections? (e.g., Geforce 7800 GT and 7800 GTX are in separate sections, etc.) Bourgeoisdude 16:04, 21 December 2006 (UTC)[reply]

The differences between the 8800GTX and GTS are GPU clock speed, memory clock speed, and stream processor count. There is not really enough difference to warrant their own sections; the sections would repeat 90% of the same information. —The preceding unsigned comment was added by 167.206.216.189 (talk) 20:20, 12 January 2007 (UTC).[reply]
You can make subheadings eventually, as more 8 series cards are released. Right now, the list just isn't long enough.

better than xenos and RSX

Is the GeForce 8 series better than the Xbox 360's and PS3's GPUs? —The preceding unsigned comment was added by Falcon866 (talkcontribs) 01:47, 23 December 2006 (UTC).[reply]

Much better in basically every way. Also vastly more expensive. --Swaaye 07:18, 23 December 2006 (UTC)[reply]

8900?

Is there any word on when the 8900 will be released and what specs we will be looking at with this to be added to the page? Mattm591 21:04, 26 December 2006 (UTC)[reply]

Nope--so far, no official word from anyone about the 8900 series at this time...Bourgeoisdude 01:04, 13 January 2007 (UTC)[reply]

Upcoming Products

Under the "Rumors" section, I added brief information about the release of the mid-range and lower-end cards. The article I got my information from can be found here.

8600 specs, but "official" enough?

http://www.theinquirer.net/default.aspx?article=37203

That page lists the specifications of the 8600GT and 8600Ultra, but is it a good enough source to be used as a source here? Pik d 09:00, 3 February 2007 (UTC)[reply]

The Inquirer I would not call a good source of anything, and I would take what they say with a grain of salt. Candle 86 15:35, 20 February 2007 (UTC)[reply]

Sounds like you're confusing The Inquirer with The Enquirer. 69.85.180.21 19:28, 26 February 2007 (UTC)[reply]

No, I'm not; Fuad is always wrong. Candle 86 04:10, 6 March 2007 (UTC)[reply]

Longest card ever?

It seems unlikely that this is the longest consumer graphics card ever, since a full length PCI card is a bit over 12" long, and I'm pretty darn sure that there existed some full length PCI graphics cards which would have qualified as "consumer". --Dyfrgi 06:45, 8 February 2007 (UTC)[reply]

No, maybe full length ISA cards, but I have yet to see any full size PCI graphics cards at all. The biggest PCI card I ever saw was a Banshee, and it wasn't that long, about the same size as a GeForce 5900. Candle 86 15:35, 20 February 2007 (UTC)[reply]

Wasn't there a dual-GPU Voodoo (or something from around that era) that was full-length PCI? --71.124.171.221 18:33, 28 August 2007 (UTC)[reply]

PCI-e/8600

It's been said that the 8800 cards won't run with just 4 or 8 PCI-e lanes (e.g. [3]). Any news on whether or not the same will be true for 8600 cards? 69.85.163.51 04:05, 26 February 2007 (UTC)[reply]

Theoretical peak performance

I modified the theoretical peak performance from 520 to 345 GFLOPS. The real specs, released by Nvidia, can be found at the Nvidia CUDA site, in the CUDA programming guide. I also linked to a forum where an internal Nvidia developer explains how the FLOPS are estimated. The previous link was obviously wrong: the SP cannot perform 3 FLOPs per clock, but 2 (one MAD, or one MUL and an ADD; not one MAD and an ADD as the old link suggests).
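For reference, both figures fall out of the same formula and differ only in how many FLOPs are counted per SP per clock (the 1.35 GHz shader clock for the 8800 GTX is an assumption here, as commonly reported at the time):

```python
# peak GFLOPS = stream processors x shader clock (GHz) x FLOPs per clock
def peak_gflops(stream_processors, shader_clock_ghz, flops_per_clock):
    return stream_processors * shader_clock_ghz * flops_per_clock

print(peak_gflops(128, 1.35, 2))  # ~345.6 GFLOPS (one MAD, or MUL + ADD)
print(peak_gflops(128, 1.35, 3))  # ~518.4 GFLOPS (the disputed ~520 figure)
```

So the dispute reduces entirely to whether the extra MUL is counted, not to any disagreement about SP count or clock speed.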

Something

How about not using symbols like the tilde (~) to represent terms like "about (amount)"? They look awful in an encyclopedia... --202.71.240.18 09:26, 8 April 2007 (UTC)[reply]

It's not official until it's official

Since NVIDIA has not made any official announcements as to the 8600 series availability other than generalised statements like "Spring of 2007", the April 17th date is not official, is it? I just hate having things stated as fact before we know they are fact, and the chart just proclaims the release of the 8600 series on the 17th, but we do not KNOW that. Put it in the article and say "such and such source" claims the release will be April 17th or whatever, but in the chart it is as if NVIDIA has it set in stone, which can be misleading. Bourgeoisdude 15:09, 10 April 2007 (UTC)[reply]

Nevermind, read the Dailytech article again, although nvidia.com does not say it is happening, nvidia has indeed announced the new cards. I guess I'm just a little too over-anxious to ensure it only proclaims future cards if they're official, but they are official this time... Bourgeoisdude 15:19, 10 April 2007 (UTC)[reply]

Is 8500 really "entry-level"?/ 8300 release date

According to the infobox at the top of the article, the 8500 GT is listed as an "entry-level"/low-end GPU. I don't think that's quite accurate. Although the 8500 is the weakest card of the current bunch (as of April 17th), it surely won't be in a couple of months. I think we should leave this entry labeled "TBA", and follow the style of the other GeForce series articles (GeForce FX, 6, 7, etc.) by removing the "GT", "GTS", etc. to prevent eventual overcrowding. And we probably shouldn't put in the 8300 until it's more official. So, I think it should look something like this: Template:NvidiaGPU

Finally, correct me if I'm wrong, but I don't think any of the sources after the 1st bullet point in the "Future Development" section say that any card other than the 8500 GT, 8600 GT, and 8600 GTS will ship on April 17th. So I think the bullet point should be reworded to indicate that the other cards mentioned (the 8600 Ultra and the 8300 series) will ship soon after the 17th, but not on the 17th (however, it is important to note they will ship, as there are entries in the .inf's of newer drivers and some websites that mark their existence).

67.167.93.51 22:27, 10 April 2007 (UTC)[reply]


I believe Nvidia claimed the 8500 as entry level. It has the same price as entry-level GPUs in the 7 and 6 series, anyway (just like the 6200 when I bought it a year ago). But if you insist, you can leave it that way, because 8500 is indeed too high a number, and there will be, someday, an 8300/8400 in the future.

No 8800gs

This website says that NVIDIA will NOT release an 8800 gs that it was a typo in the 158 driver release notes. [4] 199.8.170.40 15:09, 26 April 2007 (UTC)[reply]

I've removed the reference to it. --Xyzzyplugh 06:16, 6 May 2007 (UTC)[reply]

GeForce 8M series announced

While there is little if any info on the GeForce 7 article in regards to the mobility chips, can we at least attempt to integrate the info on the GeForce 8 mobile gpus? I'm just thinking if we don't start now, we'll never do it. I'll start a section and put the reference URL and such, assuming I stay 'unbusy' at work for a while... Bourgeoisdude 16:02, 10 May 2007 (UTC)[reply]

Well, I hate to start a frame and not finish it, but unexpected issues came up. Any help would be greatly appreciated, thanks. Bourgeoisdude 16:32, 10 May 2007 (UTC)[reply]

Floating Point Performance

According to several websites, the peak theoretical FLOPS don't reach ~500 GFLOPS because the MUL operation isn't always available. So should I change the FLOP count, or note that it is not always possible to reach this peak FLOP count because the MUL operation isn't available?

AMD's Radeon HD 2900 XT graphics processor - The Tech Report

And that page links this Beyond 3D page as its source NVIDIA G80: Architecture and GPU Analysis - Page 11

Someone look into this because I'm not completely knowledgeable with this FLOP stuff, just repeating what the site said. --Sat84 11:33, 15 May 2007 (UTC)[reply]

The best?

Isn't the 8800 Ultra card currently the best in the world? Shouldn't this be noted? --AnYoNe! 17:19, 22 May 2007 (UTC)[reply]

It all depends on the POV, and NPOV. Plouiche 15:49, 25 May 2007 (UTC)[reply]
I think it was already noted on the main GeForce page, if not here, but a more accurate term would be "the fastest" rather than best. 74.103.180.140 17:15, 14 June 2007 (UTC)[reply]
Content like that also dates the page and that means more work in the near future when it is inevitably surpassed. It's also not the fastest in every case, such as in some older games that don't work quite right with the cards due to driver issues. --Swaaye 17:37, 14 June 2007 (UTC)[reply]

Power consumption

I'm reasonably sure what you've got posted as the 8800GTX's power consumption is wrong, and I remember reading in two places that the 8800 Ultra was a revision of the G80 that allowed it to consume less power than the 8800GTX, which I believe was rated for 177W.

yeah here we are: Firingsquad claims: "8800GTX 177W, 8800 Ultra, 175W, 8800GTS 147W". http://firingsquad.com/hardware/nvidia_geforce_8800_ultra/ Anand states: "Beyond cooling, NVIDIA has altered the G80 silicon. Though they could not go into the specifics, NVIDIA indicated that layout has been changed to allow for higher clocks. They have also enhanced the 90nm process they are using to fab the chips. Adjustments targeted at improving clock speed and reducing power (which can sometimes work against each other) were made. "


I'm gonna go ahead and change that, till someone can prove me wrong.

Er, you are wrong. I don't exactly know what max board power means, but I'm sure the 8800 Ultra WILL draw more power, as its core/shader/memory clocks are higher, and the revision enabled higher clock speeds, not lower power consumption, as I remember. Here's a link to some power consumption figures [5] and [6], but I just reverted to the old ones.--Sat84 04:29, 7 June 2007 (UTC)[reply]

GeForce 8800 GTX (XFX model pictured) Image

I notice the "GeForce 8800 GTX (XFX model pictured)" image is not very sharp, I was wondering if it would be okay/proper to upload an image of a sharper photo, but different model (Evga 8800GTS).

Not a huge/important thing, but curious.

--KittenMya 20:55, 14 June 2007 (UTC)

certainly. I was thinking of doing the same but I haven't had the will to pull the card out of my computer. :) --Swaaye 22:51, 14 June 2007 (UTC)[reply]

SLI on Intel MoBo??

I heard the guys at Voodoo managed to make Crossfire work on Nvidia boards. Could it be done on Intel boards too? —Preceding unsigned comment added by 203.130.242.203 (talk) 07:10, 9 September 2007 (UTC)[reply]

8800GT specs

I know http://forums.firingsquad.com/firingsquad/board/message?board.id=hardware&message.id=110268 isn't exactly a reliable source, but it's probably not far wrong in this case. I can't think of a clean way to tag the 8800GT line as "not certain", so if people feel strongly enough about it, undo the edit.

AntiStatic 07:21, 6 October 2007 (UTC)[reply]

thanks. --Extremepilot 22:03, 7 October 2007 (UTC)[reply]

GeForce 8800 GT Pictures

Link to a forum with images can be found here: http://www.theinquirer.net/gb/inquirer/news/2007/10/08/gorge-8800-gt-pics If someone can please add these to the 8800 GT section and educate me on how to upload these with the proper license and proof of that license, I would be very grateful. I have uploaded them as GFDL but was asked to provide proof... thanks in advance --Extremepilot 15:31, 8 October 2007 (UTC)[reply]