
Talk:Texture mapping

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by 136.176.8.18 (talk) at 15:05, 4 September 2007 (I commented that the caption for the Doom picture was inaccurate, because Doom does not use affine mapping, nor is that why the walls are all vertical.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Texture mapping vs parametrization

The article needs a serious rewrite. There is a deep confusion between the parametrization problem and the texturing one. The code is largely unnecessary and probably makes everything less clear. I will probably remove it and rewrite some things in the next few days. ALoopingIcon 09:15, 7 February 2006 (UTC)[reply]

Different types of texture mapping

I think some mention should go to popular types of texturing, such as environment reflection mapping, normal/bump mapping, etc., as well as the use of transparency in texturing.

Incomplete sentence

At the moment, this incomplete sentence is in the article & I can't figure out what it was supposed to say: "Before Descent and Duke Nukem 3D, successfully used portal rendering and arbitrary orientation of the walls." Can anyone fix? Elf | Talk 05:02, 26 April 2006 (UTC)[reply]

What???

I removed this:

Between 1990 and 2000, various hybrid methods existed that mixed floating point and fractions and additionally fixed-point numbers, and mixed affine and perspective-correct texture-mapping. The mix basically uses perspective-correct texture-mapping on a large scale, but divides every polygon in 2D image-space into either quadrants (Terminal Velocity:8x8), small spans (Descent:4x1, Quake:16x1) or lines of constant z (Duke Nukem 3D, System Shock and Flight Unlimited). The constant z approach is known from Pseudo-3D. Pseudo-3D does not allow rotation of the camera, while Doom and Wacky Wheels restrict it to only one axis. Before Descent and Duke Nukem 3D, successfully used portal rendering and arbitrary orientation of the walls. 2D raytracing of a grid was added to the mix and called ray-casting. This was used in Wolfenstein 3D and Ultima Underworld. Demos often used static screen-to-texture look-up tables generated by a ray tracer to render and rotate simple symmetrical objects such as spheres and cylindrical tunnels.

After 2000, perspective-correct texture mapping became widely used via floating point numbers. Perspective-correct texture mapping adds complexity, which can easily be paralleled and pipelined costing only silicon. And it adds one divide per pixel. In this respect, a graphics card has two advantages over a CPU. First, it can trade high throughput for low latency. Second, it often has a similar z and 1/z from a former calculation. Floating point numbers have the advantage that some of the bits belong to the exponent and only need to be added. The improvement from using long floating point numbers is immense, as rounding error causes several problems during rendering. For instance (this is not a collection of examples, but a complete list for the basic texture mapper), in the transformation stage, polygons do not stay convex and have to be split into trapezoids afterwards. In the edge interpolation, the polygons do not stay flat and back face culling has to be repeated every for span, otherwise the renderer may crash (with long variables, this bug may need hours to show up--or even years). Also, because of rounding in the span interpolation, the texture coordinates may overflow, so a guard band and/or tiling is used.

Ray tracers are able to run real-time or high resolution. They use Barycentric coordinates, which produce holes at the vertices. But due to the high precision used in ray-tracing, it is unlikely that any ray will pass through these holes.

It makes no sense to me. AzaToth 03:23, 28 April 2006 (UTC)[reply]

I support your decision. This article is still being worked on; those things were just out of place.

  1. The first paragraph was de facto useless historical information. It won't apply anymore even to mobile devices.
  2. This whole FP discussion is definitely out of place (I also agree it's quite senseless considering the topic).
  3. Ray-tracing considerations should not be here just because ray tracers use FP barycentric coords.

MaxDZ8 talk 06:46, 28 April 2006 (UTC)[reply]

I added {{cleanup-rewrite}} because I think that's the reality. AzaToth 16:04, 28 April 2006 (UTC)[reply]
Is anyone actively working on this article? It really is one of the most important computer graphics related topics, yet it currently is in a pretty pitiful state. I'd like to pitch in, but there's so much to do, I'm not too sure where to start. How about we brainstorm an article overview, and hand out the sections? In any case, I'll see what I can do. Nezbie 04:16, 2 May 2006 (UTC)[reply]

I am not. The most evident problem is that this is now assumed to just be there, thus becoming Deep Magic. I am already having trouble working on shaders and level of detail (programming), so I'm sure I cannot handle this. This feature however traces back to the early ages of 3D graphics, so maybe you can find something at http://accad.osu.edu/~waynec/history/PDFs/. MaxDZ8 talk 15:47, 2 May 2006 (UTC)[reply]

Comments about the Code

The code is not really related to what texture mapping is. Texture mapping is a rasterization problem. In the code given, texture co-ordinates are simply set. Also, most implementations of setting texture co-ords are NOT done in hardware, unless you're using a vertex shader. The reason I mention all of this is that it has nothing to do with being perspectively correct! Perspective-correct texturing involves correcting for the non-linear perspective transform. To correct for perspective, u and v are divided by w in the rasterizer; otherwise you are dealing with affine texturing.
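The per-pixel divide described above can be sketched roughly like this (an illustrative Python sketch of my own, not the removed article code; `w` here is the homogeneous coordinate left over from the projection):

```python
def perspective_correct_span(uv0, uv1, w0, w1, n):
    """Interpolate texture coordinates across a span of n pixels.

    An affine mapper would lerp (u, v) directly in screen space.
    The perspective-correct version lerps (u/w, v/w, 1/w), which ARE
    linear in screen space, and divides per pixel to recover (u, v).
    """
    out = []
    for i in range(n):
        t = i / (n - 1)
        inv_w = (1 - t) * (1 / w0) + t * (1 / w1)          # lerp 1/w
        u_over_w = (1 - t) * (uv0[0] / w0) + t * (uv1[0] / w1)
        v_over_w = (1 - t) * (uv0[1] / w0) + t * (uv1[1] / w1)
        out.append((u_over_w / inv_w, v_over_w / inv_w))   # per-pixel divide
    return out
```

With w0 = 1 and w1 = 2, the midpoint of the span samples (1/3, 1/3) rather than the (0.5, 0.5) an affine lerp would give; that difference is exactly the warping affine texturing exhibits.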

I hope other people agree. I don't have time to edit the original, but I will contribute if anyone removes it and adds stubs.

When I first stumbled on this article, I was very tempted to remove the code at once. I felt this was too drastic wrt the rest of the community, so I just stated that it merely sets TexCoords. I support your proposal to remove the code completely.
The actual way texture coords are generated is not meaningful; in the rasterizer, they're just there.
For other readers, and to better support your proposal, I'll say that what you wrote is definitely correct.
If anyone else agrees (or no one objects for a while), I'll remove the code.

MaxDZ8 talk 07:27, 12 June 2006 (UTC)[reply]

The description of the code is incorrect, and the code itself is incorrect as well. That perspective correction is done in the rasterizer has already been mentioned, but beyond that this will generate a horrible seam. When you are on one side of the seam, your vertices may have a u coordinate near 1.0, but on the other side, they will have a coordinate near 0.0. Using only (x,y,z) to determine the texture will not allow you to pick either a 1.0 or 0.0 for points directly on the seam which will correspond to the other points in the current triangle, and what you will get is the entire texture appearing backwards across the single triangle. Additional information needs to be known in order to compensate for this. - Rainwarrior 19:55, 1 July 2006 (UTC)[reply]
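For readers unfamiliar with the seam problem described above: assuming the removed code derived u from vertex position with a longitude-style atan2 mapping (an assumption on my part about what it resembled), the discontinuity looks like this:

```python
import math

def sphere_u(x, z):
    """Longitude-based u in [0, 1), derived only from position (x, z).
    A naive position-only spherical mapping of the kind under discussion."""
    return (math.atan2(z, x) / (2 * math.pi)) % 1.0

# Two vertices just either side of the seam get u near 0.998 and u near
# 0.002 instead of a continuous pair, so a triangle spanning them
# interpolates backwards across almost the entire texture.
u_left = sphere_u(1.0, -0.01)   # just below the seam: close to 1.0
u_right = sphere_u(1.0, 0.01)   # just above the seam: close to 0.0
```

Because u is a function of position alone, there is no way to emit u = 1.0 for one triangle and u = 0.0 for its neighbor at the same vertex, which is exactly the missing information Rainwarrior points out.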
That's for sure. It's a well-known issue with naïve sphere mapping. It doesn't matter anyway, since the whole thing was out of place. I'm glad someone dropped the code.
MaxDZ8 talk 09:02, 2 July 2006 (UTC)[reply]

Some rewriting July 1, 2006

I've rewritten the lead, trying to preserve the same information... though it still feels disorganized to me. I'm not even sure what to do with the "history" section... probably remove it. It appears to be trying (badly) to describe the Bresenham algorithm, but also at the same time the difference between affine and perspective-correct texture mapping (with some strange confusion between the two... why keep mentioning "fractions"?). I'm thinking the section should be removed and just replaced with more description of those two things. (I'll think about it, maybe do that edit in a few minutes.) I've also noticed that the texture filtering, bilinear interpolation, and nearest neighbor interpolation articles seem to be pretty bad as well. - Rainwarrior 20:18, 1 July 2006 (UTC)[reply]

Okay, I've finished rewriting the information that was there. I think it needs some organization, but at least it's more accurate now, I hope. There are some details I don't have, like I don't know exactly when perspective correct cards hit the market (I said "recently". I hate being so vague.); there's actually no article on affine or perspective correct texture mapping. This would probably be a good addition to this article (I wouldn't suggest making a new article for it) as its own section, perhaps. - Rainwarrior 20:36, 1 July 2006 (UTC)[reply]

As far as I remember, my old Permedia2 on a Pentium 133 was already perspective correct. I find it hard to believe cards without this feature were ever mass-marketed anyway, so I would say this feature has been commonplace for at least 10 years. Texture projection however (division of texture coords by the .w coordinate) may be newer; I guess it was supported only from DX7 or DX6 (it was always supported by the GL), so that's a few years later.
Looks anything but "recent" to me.
MaxDZ8 talk 09:02, 2 July 2006 (UTC)[reply]

Just to add one thing about the division of texture co-ords by 'w': this is done in the rasterizer to perform perspective-correct texturing. It deals with the fact that the projection is non-linear.

When performing projective texture mapping, we use homogeneous texture coordinates, or coordinates in projective space.

When performing non-projective texture mapping, we use real texture coordinates, or coordinates in real space.
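A minimal sketch of that distinction, assuming texture coordinates written as (s, t, q) in the OpenGL style (the names are my choice, not from the article):

```python
def project_texcoord(s, t, q):
    """Map a homogeneous (projective) texture coordinate to a real one.
    With q == 1.0 this reduces to ordinary, non-projective texturing."""
    return (s / q, t / q)

# A coordinate scaled out to q = 2 projects back down by the divide:
# project_texcoord(2.0, 4.0, 2.0) -> (1.0, 2.0)
# Non-projective case, q = 1, is the identity on (s, t):
# project_texcoord(0.5, 0.25, 1.0) -> (0.5, 0.25)
```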

Software renderer

I have not looked at your (plural) ages, but you seem not to have lived in the times when home computers needed software for texture mapping. I guess everybody is happy that those times are gone and even Java on a mobile phone uses OpenGL (you tell me if it does). The reason for this section is that Wikipedia is full of articles about games from that time. Those authors have an even lower grasp than you of how linear and perspective-correct texture mapping were intermixed. I understand that the paragraph may be too hard to understand without pics, but who cares about DooM, Quake, or Descent anyway? Arnero 17:17, 14 April 2007 (UTC)[reply]

I don't understand what you're talking about with regard to this article. Are you saying you'd like to see references to Doom and Quake in there? (There's one reference to Michael Abrash's discussion of perspective correct textures in Quake) I'm not sure exactly when perspective correct texturing appeared in games, but it was at least as early as that. The practice is much older than when it appeared in games, of course. I don't think OpenGL is really directly related to perspective correct textures; it's just an interface to the hardware (or software) which may or may not have that kind of texturing. Yes, a picture would be good to explain perspective correct textures. - Rainwarrior 17:42, 14 April 2007 (UTC)[reply]

Affine mapping in Doom?

According to the article, Doom has to have walls perfectly vertical and floors perfectly flat because it uses affine texture mapping. The way I understand it, Doom uses a raycasting algorithm that has nothing to do with the scanline / polygon methods referred to in the article. Also, I've played Doom and seen screenshots and it doesn't seem to have any of the artifacts of affine texture mapping. So is this caption correct? 136.176.8.18 15:05, 4 September 2007 (UTC)[reply]