3/14/2024

Opengl tessellation example reddit

Part of the confusion is the confusing terminology. How do you "shade" a vertex? That implies color, when in fact vertex "shading" is really about deforming the position of vertices.

That brings us to "pixel shader." That's actually a good name for beginners learning the concept, but it's imprecise. OpenGL insists on calling it a "fragment program" because with certain forms of antialiasing, there are multiple "fragments" per pixel. "Program" is also a better name than "shader" because there are things you can do per-pixel other than change the color. For example you could change the depth written to the Z-buffer, or you could cause the pixel to be skipped based on some criteria, like whether the texture color is pink.

Anyway, it's just a tiny program that executes either per-vertex or per-pixel. For example you could write a vertex program which moves each vertex in a sinewave pattern based on time. Or you could write a fragment program to change the color of each pixel from red to green and back based on time. Then there are more advanced/recent concepts like a "geometry program," which lets you generate triangles based on vertices or edges.

Sometimes I wonder if it's overly complicated, or if the problem domain is just complicated. It took me years as a kid to finally grok this, but once I learned it, it turned out to be very simple. Honestly it wasn't until I got up enough courage to sit down with the OpenGL specs and read through them that everything clicked. They're dry reading but not difficult.

Actually, the per-vertex color is passed into the pixel shader, which then decides what to do with it. The vertex shader itself has nothing to do with color except calculating an initial value and passing it to the pixel shader. The pixel shader can be set up to interpret that value as anything: it could make it a grayscale color, it could invert it, it could use only one of the four channels, etc. There used to be no pixel shaders, which is why vertex shaders had to be used to do shading. (EDIT: This paragraph is incorrect, see comment below.)

Also, the whole idea of "color" in a vertex shader is mistaken. There is only one color: the RGB that ultimately shows up on the screen. Until then, there are only values which are passed from the main application source code to the vertex and pixel shaders, which then decide what to do with those values. Sometimes the pixel shader interprets them as a color, but they're really just floating-point numbers. That may seem like a silly distinction, but again, if your mental model is incorrect as a newcomer, you're going to have a hard time.

If you go back to what is arguably the earliest hardware that at all resembled the modern 3D pipeline, the original PlayStation (it predates DirectX, and the DirectX pipeline was greatly inspired by it, according to an article I read on here a while back by one of the people responsible for the DirectX project), it mostly consisted of two parts: a "geometry engine" embedded in the CPU, which acted like a cross between a SIMD unit and a modern vertex shader (it took as inputs a vertex, a light vector, and a transformation matrix, and produced a transformed vertex with an associated color according to a basic diffuse lighting calculation), and a "dumb" rasterizer chip, which knew nothing about 3D: it just took basic 2D screen-space vertex coordinates, color values, and texture coordinates and naively combined them with no regard for perspective correctness.
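In a real pipeline the sinewave vertex program and the red-to-green fragment program above would be written in GLSL, but the computations themselves are tiny. Here is a rough sketch of both as plain Python functions; the function names and constants are made up for illustration:

```python
import math

def vertex_program(position, time):
    """Per-vertex: displace y in a sinewave based on x and time."""
    x, y, z = position
    return (x, y + 0.1 * math.sin(4.0 * x + time), z)

def fragment_program(time):
    """Per-pixel: fade each pixel between red and green and back over time."""
    t = 0.5 * (1.0 + math.sin(time))  # oscillates between 0 and 1
    return (1.0 - t, t, 0.0)          # (r, g, b)

print(vertex_program((1.0, 0.0, 0.0), 0.0))
print(fragment_program(0.0))  # halfway point: (0.5, 0.5, 0.0)
```

The key point is only that the GPU runs the first function once per vertex and the second once per pixel (fragment), with `time` supplied by the application.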
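The "geometry engine" described above can be sketched in a few lines of Python. This is only a model of the idea, not the actual hardware interface; note that the diffuse (Lambert) term also needs a surface normal, which the paragraph's input list doesn't mention, so one is added here as an assumption:

```python
def transform(matrix, v):
    # 3x3 row-major matrix times a 3-vector
    return tuple(sum(matrix[r][c] * v[c] for c in range(3)) for r in range(3))

def geometry_engine(vertex, normal, light_dir, matrix, base_color):
    """Transform a vertex and attach a basic diffuse-lit color."""
    out = transform(matrix, vertex)
    # Lambert term: dot(normal, light direction), clamped at zero
    d = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    color = tuple(c * d for c in base_color)
    return out, color

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A vertex facing the light head-on keeps its full base color
v, c = geometry_engine((1.0, 2.0, 3.0), (0.0, 1.0, 0.0),
                       (0.0, 1.0, 0.0), identity, (255, 0, 0))
```

This is exactly the shape of a fixed-function vertex stage: position in, transformed position plus a color out, with the rasterizer left to fill in the pixels.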
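That lack of perspective correctness (the source of the PlayStation's famous texture warping) is easy to demonstrate numerically: an affine rasterizer interpolates a texture coordinate linearly in screen space, while a perspective-correct one interpolates u/w and 1/w and then divides. A minimal sketch, with illustrative names:

```python
def affine_interp(u0, u1, t):
    # PS1-style: lerp the texture coordinate directly in screen space
    return u0 + (u1 - u0) * t

def perspective_interp(u0, w0, u1, w1, t):
    # Correct: interpolate u/w and 1/w across the screen, then divide
    num = (u0 / w0) * (1.0 - t) + (u1 / w1) * t
    den = (1.0 / w0) * (1.0 - t) + (1.0 / w1) * t
    return num / den

# An edge from u=0 at depth w=1 to u=1 at depth w=4, sampled at the
# screen-space midpoint:
print(affine_interp(0.0, 1.0, 0.5))                 # 0.5
print(perspective_interp(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2
```

At the screen midpoint the correct answer is 0.2, not 0.5, because the far half of the edge covers less of the texture per screen pixel; the affine result is what makes PS1 textures slide and shear as polygons turn.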