Computer-generated images, by default, render the entire scene in perfect focus. The desire to render the world around us is common to many applications, and it is natural for such a ‘virtual world’ to include representations of humans. A realistic visual approximation of a human requires a polygon mesh of significant complexity, especially since it must be tessellated finely enough to allow smooth deformation of the model. Image-based modeling and rendering techniques have recently received much attention as a powerful alternative to traditional geometry-based techniques for image synthesis: instead of geometric primitives, a collection of sample images is used to render novel views.
Definition of Rendering Graphics
Graphics rendering is the process of generating an image from a model by means of a software program. The model is a description of three-dimensional objects in a strictly defined language or data structure; it contains geometry, viewpoint, texture, and lighting information. The output is a digital image or raster graphics image. The term is used by analogy with an “artist’s rendering” of a scene. ‘Rendering’ is also used to describe the process of calculating effects in a video-editing file to produce the final video output.
Rendering is one of the major sub-topics of 3D computer graphics, and in practice it is always connected to the others. In the graphics pipeline it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject.
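The ingredients the definition lists — geometry, viewpoint, texture, and lighting — can be made concrete with a minimal scene description. The sketch below is a hypothetical data structure, not any particular renderer’s format; all class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Triangle:
    # Geometry: three vertices in 3-D space
    vertices: Tuple[Vec3, Vec3, Vec3]
    # Stand-in for texture/material information: a flat RGB colour
    color: Vec3 = (1.0, 1.0, 1.0)

@dataclass
class Light:
    position: Vec3
    intensity: float = 1.0

@dataclass
class Camera:
    # Viewpoint: where the camera sits and what it looks at
    position: Vec3
    look_at: Vec3
    fov_degrees: float = 60.0

@dataclass
class Scene:
    # The four ingredients from the text: geometry, viewpoint, texture, lighting
    geometry: List[Triangle] = field(default_factory=list)
    camera: Camera = field(default_factory=lambda: Camera((0, 0, 5), (0, 0, 0)))
    lights: List[Light] = field(default_factory=list)

scene = Scene()
scene.geometry.append(Triangle(((0, 0, 0), (1, 0, 0), (0, 1, 0))))
scene.lights.append(Light((5, 5, 5)))
```

A renderer would take such a description as input and produce the raster image as output.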
Figure: Rendering Example
There are two categories of rendering: pre-rendering and real-time rendering. The striking difference between the two is the speed at which the computation and finalization of images take place.
- Real-Time Rendering: The prominent rendering technique used in interactive graphics and gaming, where images must be created at a rapid pace. Because user interaction is high in such environments, images must be generated in real time. Dedicated graphics hardware and pre-compilation of the available information have improved the performance of real-time rendering.
- Pre-Rendering: This technique is used in environments where speed is not a concern and the image calculations are performed on multi-core central processing units rather than dedicated graphics hardware. It is mostly used in animation and visual effects, where photorealism must meet the highest possible standard.
For these rendering types, the three major computational techniques used are:
- Scanline: A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In 3D rendering, triangles and polygons in space might be primitives. A scanline renderer processes these primitives row by row, determining for each horizontal line of pixels which primitives cover it.
- Ray casting: Ray casting is primarily used for real-time simulations, such as those in 3D computer games and cartoon animations, where detail is not important, or where it is more efficient to manually fake details in order to obtain better performance in the computational stage. This is usually the case when a large number of frames need to be animated. The results have a characteristic ‘flat’ appearance when no additional tricks are used, as if all objects in the scene were painted with a matte finish or had been lightly sanded.
- Radiosity: Radiosity is a method which attempts to simulate the way in which reflected light, instead of just bouncing to another surface, also illuminates the area around it. This produces more realistic shading and seems to better capture the ‘ambience’ of an indoor scene; a classic example is the way that shadows ‘hug’ the corners of rooms. The optical basis of the simulation is that diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.
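The core of ray casting is firing one ray per pixel and testing it against scene geometry. The sketch below is a minimal, assumed setup (a single sphere, a fixed camera, ad hoc image-plane coordinates) that produces exactly the ‘flat’ appearance described above: every hit receives the same colour.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None          # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Cast one ray per pixel of a tiny 4x4 image toward a unit sphere at the origin.
width = height = 4
image = []
for y in range(height):
    row = []
    for x in range(width):
        # Map the pixel to a point on an image plane at z = -1 from the eye
        u = (x + 0.5) / width * 2 - 1
        v = (y + 0.5) / height * 2 - 1
        t = ray_sphere_hit((0, 0, 3), (u, v, -1), (0, 0, 0), 1.0)
        # Flat shading: a single value wherever the sphere is hit
        row.append(1 if t is not None else 0)
    image.append(row)
```

The inner pixels hit the sphere and the outer ones miss, giving a blocky silhouette; real games refine this with many more pixels and cheap shading tricks rather than more physics.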
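The radiosity idea can be shown numerically with the classical radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j, solved by repeated substitution. The two-patch enclosure, emission, reflectance, and form-factor values below are illustrative assumptions chosen to make the indirect-illumination effect visible.

```python
# Two-patch enclosure: each patch sees only the other, so F[i][j] = 1 for i != j.
emission = [1.0, 0.0]          # patch 0 is a light source, patch 1 is not
reflectance = [0.5, 0.8]       # diffuse reflectivity rho of each patch
form_factor = [[0.0, 1.0],     # F[i][j]: fraction of light leaving i that reaches j
               [1.0, 0.0]]

radiosity = emission[:]        # initial guess: direct emission only
for _ in range(50):            # Jacobi-style iteration toward the fixed point
    radiosity = [
        emission[i] + reflectance[i] * sum(form_factor[i][j] * radiosity[j]
                                           for j in range(2))
        for i in range(2)
    ]
# Patch 1 emits no light itself, yet converges to a positive radiosity:
# it is lit purely by light reflected from patch 0 — indirect illumination.
```

For this system the fixed point is B_0 = 1/(1 - 0.5*0.8) ≈ 1.667 and B_1 = 0.8*B_0 ≈ 1.333, which is what the iteration converges to.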
Features of Rendering Graphics
A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others arise from combinations of techniques.
- shading — how the color and brightness of a surface varies with lighting
- texture-mapping — a method of applying detail to surfaces
- bump-mapping — a method of simulating small-scale bumpiness on surfaces
- fogging/participating medium — how light dims when passing through non-clear atmosphere or air
- shadows — the effect of obstructing light
- soft shadows — varying darkness caused by partially obscured light sources
- reflection — mirror-like or highly glossy reflection
- transparency — sharp transmission of light through solid objects
- translucency — highly scattered transmission of light through solid objects
- refraction — bending of light associated with transparency
- indirect illumination — surfaces illuminated by light reflected off other surfaces, rather than directly from a light source
- caustics (a form of indirect illumination) — reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
- depth of field — objects appear blurry or out of focus when too far in front of or behind the object in focus
- motion blur — objects appear blurry due to high-speed motion, or the motion of the camera
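The first feature in the list, shading, has a simple and widely known base case: Lambertian (diffuse) shading, where brightness is proportional to the cosine of the angle between the surface normal and the direction to the light. The function below is a self-contained sketch; the vectors and parameter names are illustrative.

```python
import math

def lambert(normal, light_dir, albedo, light_intensity):
    """Diffuse (Lambertian) shading: brightness proportional to cos(theta)
    between the surface normal and the direction toward the light."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n = normalize(normal)
    l = normalize(light_dir)
    # Clamp at zero: surfaces facing away from the light receive nothing
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * cos_theta

# A surface facing the light directly is fully lit (cos 0 = 1);
# one tilted 60 degrees away receives half the light (cos 60 = 0.5).
head_on = lambert((0, 0, 1), (0, 0, 1), 1.0, 1.0)
tilted = lambert((0, 0, 1), (0, math.sqrt(3), 1), 1.0, 1.0)
```

Most of the other features in the list (soft shadows, indirect illumination, caustics) are, in effect, progressively more expensive refinements of how this basic light–surface interaction is accumulated across a scene.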