How are 3D models
displayed? Describe and explain what an API and a Graphics pipeline are.
API - Many game engines make use of an API, which stands for Application Programming Interface. An API is a set of routines, protocols, and tools for building software applications. A good API makes it easier to develop a program by providing building blocks that the programmer then puts together. Operating environments such as Microsoft Windows provide APIs so that programmers can create their own applications for that environment. APIs are made primarily for programmers, but they also benefit ordinary users, because programs built on a common API tend to have similar interfaces, which makes new programs easier to learn. Without 3D APIs, any company developing a graphical application would typically have to rewrite the graphics part of it for every operating system.
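The idea of an API as a contract between a program and whichever implementation is installed can be sketched in a few lines. This is a hypothetical illustration, not any real graphics API; the names `Renderer`, `SoftwareRenderer`, and `render_scene` are invented for the example.

```python
from abc import ABC, abstractmethod

class Renderer(ABC):
    """A tiny made-up 'API': a contract of routines any backend must provide."""
    @abstractmethod
    def draw_triangle(self, vertices):
        ...

class SoftwareRenderer(Renderer):
    """One possible implementation fulfilling the contract."""
    def draw_triangle(self, vertices):
        return f"software-drew {len(vertices)} vertices"

def render_scene(renderer: Renderer):
    # The calling code only knows the API, not which backend it talks to,
    # so it works unchanged whatever implementation is installed.
    return renderer.draw_triangle([(0, 0), (1, 0), (0, 1)])

print(render_scene(SoftwareRenderer()))  # software-drew 3 vertices
```

Swapping in a different `Renderer` subclass would change how the triangle is drawn without touching `render_scene` at all, which is exactly why a common API saves rewriting the graphics code per platform.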
Direct 3D - Direct3D is an API used to display and manipulate 3D objects. It was originally developed by RenderMorphics, acquired by Microsoft in 1995, and has been developed by Microsoft ever since as the 3D component of its DirectX suite (the name "3Dx" sometimes seen is a confusion; DirectX is the correct name of the suite). Direct3D gives programmers a way to develop 3D programs that can use whatever graphics acceleration device is installed in a machine, and virtually every 3D accelerator card used in Windows PCs supports it. Many PC games render their worlds through such an API; the game Cube World is one example of a modern 3D game.
[Image: screenshot from the game Cube World.]
Graphics pipeline - In 3D computer graphics, the terms graphics pipeline or rendering pipeline refer to the way the 3D mathematical information contained in the objects of a scene or environment is converted into a video or an image. The graphics pipeline accepts some representation of a 3D primitive as input and usually produces a 2D raster image as output. Direct3D and OpenGL, the two most notable 3D graphics standards, share broadly similar graphics pipelines.
Stages of the Graphics Pipeline -
Firstly, 3D geometric primitives. The scene is built out of geometric primitives. The most popular choice by far is the triangle, because a triangle's three vertices always lie on a single plane, which makes triangles easy for the later stages to process.
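Because a triangle always lies on a single plane, that plane's orientation can be found directly from the three vertices with a cross product. A minimal sketch (function name invented for the example):

```python
def triangle_normal(a, b, c):
    """Return the normal of the plane containing triangle a, b, c."""
    # Edge vectors from vertex a.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    # The cross product u x v is perpendicular to the triangle's plane.
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# A triangle lying flat in the z = 0 plane has a normal along the z axis.
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0, 0, 1]
```

Primitives with four or more vertices do not have this guarantee, which is one reason renderers usually break them down into triangles first.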
Secondly, modelling and transformation. In 3D, transformations gain an extra co-ordinate: instead of the two axes X and Y used in 2D, a third axis Z is added. Each model is transformed from its own local co-ordinate system into the shared world co-ordinate system, which makes it straightforward to position, rotate, and scale a 3D object within the scene.
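A modelling transformation on (x, y, z) points can be as simple as a scale followed by a translation. A minimal sketch, with the function name and parameters invented for illustration:

```python
def transform_point(p, scale=(1, 1, 1), translate=(0, 0, 0)):
    """Place a model-space point into the world: scale, then translate."""
    return tuple(p[i] * scale[i] + translate[i] for i in range(3))

# Double the model's size and lift it 5 units along the z axis.
print(transform_point((1, 2, 3), scale=(2, 2, 2), translate=(0, 0, 5)))
# (2, 4, 11)
```

Real pipelines express the same idea (plus rotation) as a single 4x4 matrix multiply, so many transforms can be combined into one.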
Thirdly, camera transformation. The 3D world co-ordinates are transformed so that the camera becomes the origin; every point is then expressed relative to the camera, which simplifies the stages that follow.
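Making the camera the origin amounts to re-expressing each point relative to the camera's position (a full camera transform would also rotate to match the camera's orientation, omitted here for brevity). A sketch with an invented function name:

```python
def world_to_camera(p, camera_pos):
    """Re-express a world-space point relative to the camera's position."""
    # Subtracting the camera position puts the camera at (0, 0, 0).
    return tuple(p[i] - camera_pos[i] for i in range(3))

# A point 8 units in front of a camera at (5, 3, 2):
print(world_to_camera((5, 3, 10), (5, 3, 2)))  # (0, 0, 8)
```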
Fourthly, lighting. Geometry is illuminated according to the scene's lighting and the surfaces' reflectance. For example, if a table in a room is a magnificent bright white colour but the room is totally dark, the camera will see the table as black even though it is actually bright white. During rasterization, the lighting values computed at the vertices are carefully interpolated across each primitive.
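The white-table-in-a-dark-room effect falls out of a simple diffuse (Lambertian) lighting model: the surface's own brightness is multiplied by how directly the light falls on it, clamped at zero. A minimal sketch, with names invented for the example:

```python
def lambert(normal, light_dir, surface_brightness):
    """Diffuse lighting: brightness scaled by the angle to the light."""
    # Dot product = cosine of the angle between surface normal and light
    # direction (both assumed to be unit vectors).
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return surface_brightness * max(0.0, dot)

# Light shining straight down onto the upward-facing white table top:
print(lambert((0, 1, 0), (0, 1, 0), 1.0))   # 1.0 -> seen as bright white
# No light reaching the surface at all:
print(lambert((0, 1, 0), (0, -1, 0), 1.0))  # 0.0 -> seen as black
```

However bright the surface's own colour, zero incoming light always multiplies out to black, exactly as with the table in the dark room.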
Next is projection transformation. In a perspective projection, objects distant from the camera are made smaller. This is achieved by dividing the X and Y co-ordinates of each vertex of each primitive by its Z co-ordinate, which represents its distance from the camera. In an orthographic projection, by contrast, objects retain their original size regardless of their distance from the camera.
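The divide-by-Z rule described above can be shown directly: the same point pushed further from the camera lands closer to the centre of the image under perspective, but is unchanged under orthographic projection. Function names are invented for the sketch:

```python
def perspective_project(p):
    """Divide x and y by z: distant points (large z) shrink toward centre."""
    x, y, z = p
    return (x / z, y / z)

def orthographic_project(p):
    """Simply drop z: size is unaffected by distance."""
    x, y, z = p
    return (x, y)

print(perspective_project((4, 2, 2)))   # (2.0, 1.0)
print(perspective_project((4, 2, 4)))   # (1.0, 0.5)  further away, so smaller
print(orthographic_project((4, 2, 4)))  # (4, 2)      unchanged by depth
```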
Next is scan conversion, or rasterization. Rasterization is the process by which the 2D image-space representation of the scene is converted into raster format and the correct resulting pixel values are determined. From this point on, operations are carried out on each individual pixel. This stage is rather complex, involving many steps that are often referred to as a group under the name pixel pipeline.
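One common way to rasterize a triangle is the edge-function test: visit each pixel centre and keep those that lie on the inner side of all three edges. A simplified sketch (real rasterizers restrict the loop to the triangle's bounding box and handle winding and fill rules more carefully):

```python
def edge(a, b, p):
    """Signed area test: which side of edge a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width, height):
    """Return the grid pixels whose centres fall inside the triangle."""
    a, b, c = tri
    pixels = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel's centre
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            # Inside if the point is on the same side of all three edges
            # (counter-clockwise winding assumed).
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                pixels.append((x, y))
    return pixels

# A triangle covering the lower-left half of a 4x4 pixel grid
# lights up the 10 pixels below its diagonal edge.
covered = rasterize([(0, 0), (4, 0), (0, 4)], 4, 4)
print(len(covered), covered)
```

The three edge values `w0, w1, w2` double as (unnormalised) barycentric weights, which is what later stages use to interpolate colours and lighting across the triangle.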
Then texturing and fragment shading. At this stage we are nearing the end of the pipeline. Each individual fragment is assigned a colour based on values interpolated from the vertices during rasterization, on a texture held in memory, or on the output of a shader program.
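The simplest form of fragment shading is interpolating the vertex colours using the barycentric weights that rasterization produced. A minimal sketch with invented names:

```python
def interpolate_color(weights, colors):
    """Blend per-vertex colours using barycentric weights (summing to 1)."""
    return tuple(sum(w * col[i] for w, col in zip(weights, colors))
                 for i in range(3))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

# A fragment sitting exactly at the first vertex takes that vertex's colour...
print(interpolate_color((1.0, 0.0, 0.0), [red, green, blue]))
# (255.0, 0.0, 0.0)

# ...while one halfway toward the first vertex blends all three.
print(interpolate_color((0.5, 0.25, 0.25), [red, green, blue]))
# (127.5, 63.75, 63.75)
```

Texture mapping works the same way, except the interpolated values are texture co-ordinates, which are then used to look up a colour in an image held in memory; a shader program can replace this fixed recipe with arbitrary per-fragment code.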


