2.5D (pronounced "two-and-a-half dimensional") perspective refers to gameplay or movement in a video game or virtual reality environment that is restricted to a two-dimensional (2D) plane with little to no access to a third dimension, in a space that otherwise appears to be three-dimensional and is often simulated and rendered in a 3D digital environment.
This is similar to, but distinct from, pseudo-3D perspective (sometimes called three-quarter view when the environment is portrayed from an angled top-down perspective), which refers to 2D graphical projections and similar techniques used to make images or scenes simulate the appearance of being three-dimensional (3D) when in fact they are not.
By contrast, games, spaces or perspectives that are simulated and rendered in 3D and used in 3D level design are said to be true 3D, while 2D-rendered games made to appear as 2D without approximating a 3D image are said to be true 2D.
Common in video games, 2.5D projections have also been useful in geographic visualization (GVIS) to help understand visual-cognitive spatial representations or 3D visualization.[1]
The terms three-quarter perspective and three-quarter view trace their origins to the three-quarter profile in portraiture and facial recognition, which depicts a person's face that is partway between a frontal view and a side view.[2]
In axonometric projection and oblique projection, two forms of parallel projection, the viewpoint is rotated slightly to reveal other facets of the environment than those visible in a top-down perspective or side view, thereby producing a three-dimensional effect. An object is "considered to be in an inclined position resulting in foreshortening of all three axes",[3] and the image is a "representation on a single plane (as a drawing surface) of a three-dimensional object placed at an angle to the plane of projection."[3] Lines perpendicular to the plane become points, lines parallel to the plane have true length, and lines inclined to the plane are foreshortened.
They are popular camera perspectives among 2D video games, most commonly those released for 16-bit or earlier and handheld consoles, as well as in later strategy and role-playing video games. The advantage of these perspectives is that they combine the visibility and mobility of a top-down game with the character recognizability of a side-scrolling game. The player is thus presented with an overview of the game world, more or less as if seen from above, while the slanted angle allows additional detail in the artwork: instead of showing a humanoid in top-down perspective as a head and shoulders seen from above, the entire body can be drawn, and turning a character around reveals how it looks from the sides, the front and the back, whereas the top-down perspective would display the same head and shoulders regardless.
There are three main divisions of axonometric projection: isometric (equal measure), dimetric (symmetrical and unsymmetrical), and trimetric (single-view or only two sides). The most common of these drawing types in engineering drawing is isometric projection. This projection is tilted so that all three axes create equal angles at intervals of 120 degrees. The result is that all three axes are equally foreshortened. In video games, a form of dimetric projection with a 2:1 pixel ratio is more common due to the problems of anti-aliasing and square pixels found on most computer monitors.
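As a rough illustration of the 2:1 dimetric convention, the following sketch converts grid coordinates to screen pixels and back; the tile sizes and function names are illustrative assumptions, not taken from any particular engine.

```python
# Minimal sketch: converting tile (grid) coordinates to screen pixels under the
# 2:1 dimetric projection commonly used in games. Tile sizes are illustrative.

TILE_WIDTH = 64    # width of one tile diamond in pixels
TILE_HEIGHT = 32   # height of one tile diamond (the 2:1 ratio keeps edges clean)

def grid_to_screen(gx: int, gy: int, origin_x: int = 0, origin_y: int = 0):
    """Map integer grid coordinates to the pixel position of the tile's top corner."""
    screen_x = origin_x + (gx - gy) * (TILE_WIDTH // 2)
    screen_y = origin_y + (gx + gy) * (TILE_HEIGHT // 2)
    return screen_x, screen_y

def screen_to_grid(sx: float, sy: float, origin_x: int = 0, origin_y: int = 0):
    """Inverse mapping, useful for mouse picking."""
    dx = (sx - origin_x) / (TILE_WIDTH / 2)
    dy = (sy - origin_y) / (TILE_HEIGHT / 2)
    return (dx + dy) / 2, (dy - dx) / 2

print(grid_to_screen(3, 1))      # -> (64, 64) with the tile sizes above
print(screen_to_grid(64, 64))    # -> (3.0, 1.0)
```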
In oblique projection, typically all three axes are shown without foreshortening. All lines parallel to the axes are drawn to scale, while diagonals and curved lines are distorted. One tell-tale sign of oblique projection is that the face oriented toward the camera retains its right angles with respect to the image plane.
Two examples of oblique projection are Ultima VII: The Black Gate and Paperboy. Examples of axonometric projection include SimCity 2000 and the role-playing games Diablo and Baldur's Gate.
In three-dimensional scenes, the term billboarding is applied to a technique in which objects are sometimes represented by two-dimensional images applied to a single polygon that is typically kept perpendicular to the line of sight. The name refers to the fact that objects are seen as if drawn on a billboard. This technique was commonly used in early 1990s video games when consoles did not have the hardware power to render fully 3D objects. It is also known as a backdrop. Billboarding can be used to good effect for a significant performance boost when the geometry is sufficiently distant that it can be seamlessly replaced with a 2D sprite. In games, the technique is most frequently applied to objects such as particles (smoke, sparks, rain) and low-detail vegetation. It has since become mainstream and is found in many games such as Rome: Total War, where it is exploited to simultaneously display thousands of individual soldiers on a battlefield. Early examples include early first-person shooters like Marathon Trilogy, Wolfenstein 3D, Doom, Hexen and Duke Nukem 3D, as well as racing games like Carmageddon and Super Mario Kart and platformers like Super Mario 64.
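The geometry involved is simple: each billboard is a quad whose corners are rebuilt every frame from the camera's right and up axes. The sketch below is a minimal illustration of that idea under those assumptions; the function name and inputs are made up for the example, not any particular engine's API.

```python
# Minimal sketch of "spherical" billboarding: a sprite quad is rebuilt every
# frame so that it always faces the camera. Vector math only; names illustrative.
import numpy as np

def billboard_corners(center, cam_right, cam_up, width, height):
    """Return the four world-space corners of a camera-facing quad.

    cam_right and cam_up are the camera's right/up axes (the rotation part of
    the view matrix), so the quad stays parallel to the image plane.
    """
    r = np.asarray(cam_right, dtype=float) * (width * 0.5)
    u = np.asarray(cam_up, dtype=float) * (height * 0.5)
    c = np.asarray(center, dtype=float)
    return [c - r - u, c + r - u, c + r + u, c - r + u]

# Example: a 2x2 smoke sprite at the origin with an axis-aligned camera.
corners = billboard_corners(center=(0, 0, 0),
                            cam_right=(1, 0, 0),
                            cam_up=(0, 1, 0),
                            width=2.0, height=2.0)
```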
Skyboxes and skydomes are methods used to easily create a background to make a game level look bigger than it really is. If the level is enclosed in a cube, the sky, distant mountains, distant buildings, and other unreachable objects are rendered onto the cube's faces using a technique called cube mapping, thus creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses a sphere or hemisphere instead of a cube.
As a viewer moves through a 3D scene, it is common for the skybox or skydome to remain stationary with respect to the viewer. This gives the skybox the illusion of being very far away, since other objects in the scene appear to move while the skybox does not. This imitates real life, where distant objects such as clouds, stars and even mountains appear to be stationary when the viewpoint is displaced by relatively small distances. Effectively, everything in a skybox will always appear infinitely distant from the viewer. Designers must therefore take care not to include images of discrete objects in the textures of a skybox, since the viewer may be able to perceive the inconsistencies of those objects' sizes as the scene is traversed.
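One common way to achieve this, sketched below under the assumption of a standard 4x4 view matrix (not a specific engine's API), is to draw the skybox with the camera's rotation but without its translation.

```python
# Minimal sketch: a skybox appears infinitely distant because it is drawn with
# the camera's rotation but *not* its translation. Here the translation column
# of a 4x4 view matrix is zeroed before the skybox pass. Illustrative only.
import numpy as np

def skybox_view_matrix(view: np.ndarray) -> np.ndarray:
    """Copy a 4x4 view matrix and remove its translation component."""
    sky_view = view.copy()
    sky_view[:3, 3] = 0.0   # the viewer never moves relative to the skybox
    return sky_view
```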
In some games, sprites are scaled larger or smaller depending on their distance to the player, producing the illusion of motion along the Z (forward) axis. Sega's 1986 video game Out Run, which runs on the Sega OutRun arcade system board, is a good example of this technique.
In Out Run, the player drives a Ferrari into the depth of the game window. The palms on the left and right side of the street are the same bitmap, but have been scaled to different sizes, creating the illusion that some are closer than others. The angles of movement are "left and right" and "into the depth" (while technically still capable of doing so, the game did not allow making a U-turn or going into reverse, i.e. moving "out of the depth", as this made no sense with the high-speed gameplay and tense time limit). The view is comparable to that which a driver would have in reality when driving a car. The position and size of any billboard are generated by a (complete 3D) perspective transformation, as are the vertices of the poly-line representing the center of the street. Often the center of the street is stored as a spline and sampled in such a way that on straight streets every sampling point corresponds to one scan-line on the screen. Hills and curves lead to multiple points on one line, and one has to be chosen; or a line has no point at all and has to be interpolated linearly from the adjacent lines. Very memory-intensive billboards are used in Out Run to draw corn fields and water waves that are wider than the screen even at the largest viewing distance, and also in Test Drive to draw trees and cliffs.
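A minimal sketch of the underlying idea follows; the projection constant and screen dimensions are illustrative assumptions, not values from Out Run itself.

```python
# Minimal sketch of sprite scaling: a roadside object's on-screen position and
# size shrink in proportion to its distance from the camera, which is all the
# "3D" such games need. Constants and names are illustrative.

FOCAL = 256          # projection constant (pixels)
SCREEN_CX = 160      # screen centre x
HORIZON_Y = 100      # y of the horizon line

def project_billboard(world_x, world_y, world_z, sprite_w, sprite_h):
    """Project a billboard at camera-relative (x, y, z) onto the screen."""
    if world_z <= 0:
        return None                     # behind the camera, skip
    scale = FOCAL / world_z             # farther away -> smaller
    screen_x = SCREEN_CX + world_x * scale
    screen_y = HORIZON_Y + world_y * scale
    return screen_x, screen_y, sprite_w * scale, sprite_h * scale

# The same palm-tree bitmap drawn at z = 100 and z = 400 comes out four times
# smaller the second time, creating the illusion of depth.
print(project_billboard(50, 20, 100, 64, 64))
print(project_billboard(50, 20, 400, 64, 64))
```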
Drakkhen was notable for being among the first role-playing video games to feature a three-dimensional playing field. However, it did not employ a conventional 3D game engine, instead emulating one using character-scaling algorithms. The player's party travels overland on a flat terrain made up of vectors, on which 2D objects are zoomed. Drakkhen features an animated day-night cycle and the ability to wander freely about the game world, both rarities for a game of its era. This type of engine was later used in the game Eternam.
Some mobile games released on the Java ME platform, such as the mobile version of Asphalt: Urban GT and Driver: L.A. Undercover, used this method for rendering the scenery. While the technique is similar to some of Sega's arcade games, such as Thunder Blade and Cool Riders and the 32-bit version of Road Rash, it uses polygons instead of sprite scaling for buildings and certain objects, although the result appears flat-shaded. Later mobile games (mainly from Gameloft), such as Asphalt 4: Elite Racing and the mobile version of Iron Man 2, use a mix of sprite scaling and texture mapping for some buildings and objects.
Parallaxing refers to a technique in which a collection of 2D sprites or layers of sprites are made to move independently of each other and/or the background to create a sense of added depth.[4]: 103 This depth cue is created by the relative motion of the layers. The technique grew out of the multiplane camera technique used in traditional animation since the 1940s.[5] This type of graphical effect was first used in the 1982 arcade game Moon Patrol.[6] Examples include the skies in Rise of the Triad, the arcade version of Rygar, Sonic the Hedgehog, Street Fighter II, Shadow of the Beast and Dracula X Chronicles, as well as Super Mario World.
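In code, the effect reduces to scrolling each layer by a fraction of the camera's movement; the layer names and factors below are purely illustrative.

```python
# Minimal sketch of parallax scrolling: each background layer scrolls at a
# fraction of the camera's speed, so nearer layers appear to move faster.

LAYERS = [
    ("far mountains", 0.2),
    ("near hills",    0.5),
    ("foreground",    1.0),
]

def layer_offsets(camera_x: float):
    """Return the horizontal draw offset for each layer."""
    return {name: -camera_x * factor for name, factor in LAYERS}

print(layer_offsets(100.0))
# {'far mountains': -20.0, 'near hills': -50.0, 'foreground': -100.0}
```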
Mode 7, a display system effect that included rotation and scaling, allowed for a 3D effect while moving in any direction without any actual 3D models, and was used to simulate 3D graphics on the SNES.
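The core of such an effect is an affine mapping whose scale changes per scanline; the following sketch uses assumed constants for illustration and does not reflect the SNES hardware registers.

```python
# Minimal sketch of a Mode 7-style effect: a flat texture is rendered as a
# receding plane by applying a different scale (an affine transform) to every
# scanline below the horizon. Constants are purely illustrative.

def mode7_scanline_params(screen_h, horizon_y, cam_height, focal):
    """For each scanline below the horizon, return the world scale of one pixel."""
    params = []
    for y in range(horizon_y + 1, screen_h):
        depth = cam_height * focal / (y - horizon_y)   # distance of this row
        params.append((y, depth / focal))              # texels per screen pixel
    return params

for y, texels_per_pixel in mode7_scanline_params(200, 100, cam_height=32, focal=128)[:3]:
    print(y, round(texels_per_pixel, 2))   # rows nearer the horizon cover more texels
```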
Ray casting is a first-person pseudo-3D technique in which a ray for every vertical slice of the screen is sent from the position of the camera. These rays shoot out until they hit an object or wall, and that part of the wall is rendered in that vertical screen slice.[8] Due to the limited camera movement and internally 2D playing field, this is often considered 2.5D.[9]
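A minimal sketch of the idea follows, using a naive step-march rather than the grid-traversal (DDA) a real engine would use; the map, field of view and step size are illustrative assumptions.

```python
# Minimal sketch of ray casting on a 2D grid map (Wolfenstein 3D-style).
# One ray is marched per screen column; the hit distance sets the wall height.
import math

MAP = ["111111",
       "1....1",
       "1..1.1",
       "1....1",
       "111111"]          # '1' = wall, '.' = empty; fully enclosed by walls

def cast_column(px, py, angle, max_dist=20.0, step=0.02):
    """March a ray from (px, py) until it hits a wall; return the distance."""
    dist = 0.0
    while dist < max_dist:
        dist += step
        x = px + math.cos(angle) * dist
        y = py + math.sin(angle) * dist
        if MAP[int(y)][int(x)] == '1':
            return dist
    return max_dist

def render(px, py, facing, fov=math.radians(60), columns=80, screen_h=50):
    """Return a wall height for every vertical screen slice."""
    heights = []
    for col in range(columns):
        angle = facing - fov / 2 + fov * col / (columns - 1)
        d = cast_column(px, py, angle)
        d *= math.cos(angle - facing)        # correct fisheye distortion
        heights.append(min(screen_h, int(screen_h / max(d, 1e-6))))
    return heights

print(render(2.5, 1.5, facing=0.0)[:8])
```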
Bump mapping, normal mapping and parallax mapping are techniques applied to textures in 3D rendering applications such as video games to simulate bumps and wrinkles on the surface of an object without using more polygons. To the end user, this means that textures such as stone walls will have more apparent depth and thus greater realism, with less of an influence on the performance of the simulation.
Bump mapping is achieved by perturbing the surface normals of an object and using a grayscale image and the perturbed normal during illumination calculations. The result is an apparently bumpy surface rather than a perfectly smooth surface, although the surface of the underlying object is not actually changed. Bump mapping was introduced by Blinn in 1978.[10]
In normal mapping, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the dot product is the intensity of the light on that surface. Imagine a polygonal model of a sphere: you can only approximate the shape of the surface. By using a 3-channel bitmapped image textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (x, y and z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.
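As a small worked illustration, decoding a normal-map texel and applying the dot product can be sketched as follows; the texel value and vectors are assumptions made for the example, not any engine's convention.

```python
# Minimal sketch of the normal-mapping lighting calculation: a tangent-space
# normal is decoded from an RGB texel and dotted with the light direction.
import numpy as np

def decode_normal(rgb):
    """Map an 8-bit RGB texel (0..255 per channel) to a unit normal in [-1, 1]."""
    n = np.asarray(rgb, dtype=float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

def diffuse_intensity(normal, light_dir):
    """Lambertian term: clamped dot product of the normal and the unit light vector."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return max(0.0, float(np.dot(normal, l)))

n = decode_normal((128, 128, 255))        # the typical "flat" normal-map texel
print(diffuse_intensity(n, (0.3, 0.5, 1.0)))
```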
Parallax mapping (also called offset mapping or virtual displacement mapping) is an enhancement of the bump mapping and normal mapping techniques, implemented by displacing the texture coordinates at a point on the rendered polygon by a function of the view angle in tangent space (the angle relative to the surface normal) and the value of the height map at that point. At steeper view angles, the texture coordinates are displaced more, giving the illusion of depth due to parallax effects as the view changes.
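A minimal sketch of the offset calculation might look like the following; the height_scale parameter and inputs are illustrative assumptions.

```python
# Minimal sketch of parallax (offset) mapping: the texture coordinate is shifted
# along the view direction in tangent space by an amount proportional to the
# height sampled at that point.

def parallax_offset_uv(uv, view_dir_tangent, height, height_scale=0.05):
    """Shift (u, v) toward the viewer by height * scale; steeper angles shift more."""
    u, v = uv
    vx, vy, vz = view_dir_tangent          # unit view vector in tangent space
    offset = height * height_scale
    return (u - vx / vz * offset,
            v - vy / vz * offset)

# A fragment seen at a grazing angle (small vz) gets a larger offset than one
# seen head-on, producing the illusion of depth as the view changes.
print(parallax_offset_uv((0.5, 0.5), (0.6, 0.0, 0.8), height=0.7))
```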
The term is also used to describe an animation effect commonly used in music videos and, more frequently, title sequences. Brought to wide attention by the motion picture The Kid Stays in the Picture, an adaptation of film producer Robert Evans's memoir, it involves the layering and animating of two-dimensional pictures in three-dimensional space. Earlier examples of this technique include Liz Phair's music video "Down" (directed by Rodney Ascher) and "A Special Tree" (directed by musician Giorgio Moroder).
On a larger scale, the 2018 movie In Saturn's Rings used over 7.5 million separate two-dimensional images, captured in space or by telescopes, which were composited and moved using multi-plane animation techniques.
The term also refers to an often-used effect in the design of icons and graphical user interfaces (GUIs), where a slight 3D illusion is created by the presence of a virtual light source to the left (or in some cases right) side of, and above, a person's computer monitor. The light source itself is always invisible, but its effects are seen in the lighter colours of the top and left sides, simulating reflection, and the darker colours to the right and below of such objects, simulating shadow.
An advanced version of this technique can be found in some specialised graphic design software, such as Pixologic's ZBrush. The idea is that the program's canvas represents a normal 2D painting surface, but that the data structure holding the pixel information is also able to store information with respect to a z-index, as well as material settings, specularity, and so on. Again, with this data it is thus possible to simulate lighting, shadows, and so forth.
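A toy sketch of such a per-pixel data structure might look like the following; the field names are illustrative and do not reflect ZBrush's actual internal format.

```python
# Minimal sketch of the "2.5D canvas" idea: each canvas pixel stores not only a
# colour but also a depth value and material data, which is enough to re-light
# the image later.
from dataclasses import dataclass

@dataclass
class DeepPixel:
    color: tuple          # (r, g, b)
    depth: float          # z-value of the stroke that last touched this pixel
    normal: tuple         # surface normal used when re-lighting
    specularity: float    # simple material parameter

# A 640x480 canvas of empty (infinitely deep) pixels.
canvas = [[DeepPixel((0, 0, 0), float("inf"), (0.0, 0.0, 1.0), 0.0)
           for _ in range(640)] for _ in range(480)]
```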
The first video games that used pseudo-3D were primarily arcade games, the earliest known examples dating back to the mid-1970s, when they began using microprocessors. In 1975, Taito released Interceptor,[11] an early first-person shooter and combat flight simulator that involved piloting a jet fighter, using an eight-way joystick to aim with a crosshair and shoot at enemy aircraft that move in formations of two and increase/decrease in size depending on their distance to the player.[12] In 1976, Sega released Moto-Cross, an early black-and-white motorbike racing video game, based on the motocross competition, that was most notable for introducing an early three-dimensional third-person perspective.[13] Later that year, Sega-Gremlin re-branded the game as Fonz, as a tie-in for the popular sitcom Happy Days.[14] Both versions of the game displayed a constantly changing forward-scrolling road and the player's bike in a third-person perspective where objects nearer to the player are larger than those nearer to the horizon, and the aim was to steer the vehicle across the road, racing against the clock, while avoiding any oncoming motorcycles or driving off the road.[13][14] That same year also saw the release of two arcade games that extended the car driving subgenre into three dimensions with a first-person perspective: Sega's Road Race, which displayed a constantly changing forward-scrolling S-shaped road with two obstacle race cars moving along the road that the player must avoid crashing into while racing against the clock,[15] and Atari's Night Driver, which presented a series of posts by the edge of the road, though there was no view of the road or the player's car. Games using vector graphics had an advantage in creating pseudo-3D effects: 1979's Speed Freak recreated the perspective of Night Driver in greater detail.
In 1979, Nintendo debuted Radar Scope, a shoot 'em up that introduced a three-dimensional third-person perspective to the genre, imitated years later by shooters such as Konami's Juno First and Activision's Beamrider.[16] In 1980, Atari's Battlezone was a breakthrough for pseudo-3D gaming, recreating a 3D perspective with unprecedented realism, though the gameplay was still planar. It was followed up that same year by Red Baron, which used scaling vector images to create a forward-scrolling rail shooter.
Sega's arcade shooter Space Tactics, released in 1980, allowed players to take aim using crosshairs and shoot lasers into the screen at enemies coming towards them, creating an early 3D effect.[17] It was followed by other arcade shooters with a first-person perspective during the early 1980s, including Taito's 1981 release Space Seeker,[18] and Sega's Star Trek in 1982.[19] Sega's SubRoc-3D in 1982 also featured a first-person perspective and introduced the use of stereoscopic 3-D through a special eyepiece.[20] Sega's Astron Belt in 1983 was the first laserdisc video game, using full-motion video to display the graphics from a first-person perspective.[21] Third-person rail shooters were also released in arcades at the time, including Sega's Tac/Scan in 1982,[22] Nippon's Ambush in 1983,[23] Nichibutsu's Tube Panic in 1983,[24] and Sega's 1982 release Buck Rogers: Planet of Zoom,[25] notable for its fast pseudo-3D scaling and detailed sprites.[26]
In 1981, Sega's Turbo was the first racing game to use sprite scaling with full-colour graphics.[26] Pole Position by Namco is one of the first racing games to use the trailing camera effect that is now so familiar[citation needed]. In this particular example, the effect was produced by linescroll: the practice of scrolling each line independently in order to warp an image. In this case, the warping simulated curves and steering. To make the road appear to move towards the player, per-line color changes were used, though many console versions opted for palette animation instead.
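A minimal sketch of linescroll-style road curving, with illustrative screen dimensions and curve strength, might look like this.

```python
# Minimal sketch of linescroll: each scanline of the road is drawn with its own
# horizontal offset, so shifting the offsets per line bends a straight road
# image into a curve.

def road_line_offsets(screen_h, horizon_y, curve):
    """Per-scanline x offsets; lines nearer the bottom (closer) shift the most."""
    offsets = []
    for y in range(horizon_y, screen_h):
        t = (y - horizon_y) / (screen_h - horizon_y)   # 0 at horizon, 1 at bottom
        offsets.append(curve * t * t)                  # quadratic gives a smooth bend
    return offsets

print(road_line_offsets(screen_h=200, horizon_y=100, curve=80.0)[:5])
```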
Zaxxon, a shooter introduced by Sega in 1982, was the first game to use isometric axonometric projection, from which its name is derived. Though Zaxxon's playing field is semantically 3D, the game has many constraints that classify it as 2.5D: a fixed point of view, scene composition from sprites, and movements such as bullet shots restricted to straight lines along the axes. It was also one of the first video games to display shadows.[27] The following year, Sega released the first pseudo-3D isometric platformer, Congo Bongo.[28] Another early pseudo-3D platform game released that year was Konami's Antarctic Adventure, where the player controls a penguin in a forward-scrolling third-person perspective while having to jump over pits and obstacles.[29][30][31] It was one of the earliest pseudo-3D games available on a computer, released for the MSX in 1983.[31] In 1982, Irem's Moon Patrol was a side-scrolling run & gun platform-shooter that introduced the use of layered parallax scrolling to give a pseudo-3D effect.[32] In 1985, Space Harrier introduced Sega's "Super Scaler" technology that allowed pseudo-3D sprite-scaling at high frame rates,[33] with the ability to scale 32,000 sprites and fill a moving landscape with them.[34]
The first original home console game to use pseudo-3D, and also the first to use multiple camera angles mirroring those of television sports broadcasts, was Intellivision World Series Baseball (1983) by Don Daglow and Eddie Dombrower, published by Mattel. Its television sports style of display was later adopted by 3D sports games and is now used by virtually all major team sports titles. In 1984, Sega ported several pseudo-3D arcade games to the Sega SG-1000 console, including a smooth conversion of the third-person pseudo-3D rail shooter Buck Rogers: Planet of Zoom.[33]
By 1989, 2.5D representations were surfaces drawn with depth cues and were part of graphic libraries such as GINO.[35] 2.5D was also used in terrain modeling with software packages such as ISM from Dynamic Graphics, GEOPAK from Uniras and the Intergraph DTM system.[35] 2.5D surface techniques gained popularity within the geography community because of their ability to visualize the normal thickness-to-area ratio used in many geographic models; this ratio was typically very small, reflecting the thinness of the object in relation to its width, so the object could be treated as realistic within a specific plane.[35] These representations were axiomatic in that the entire subsurface domain was not used, or the entire domain could not be reconstructed; only a surface was used, and a surface is one aspect of an object, not its full 3D identity.[35]
The specific term "two-and-a-half-D" was used as early as 1994 by Warren Spector in an interview in the North American premiere issue of PC Gamer magazine. At the time, the term was understood to refer specifically to first-person shooters like Wolfenstein 3D and Doom, to distinguish them from System Shock's "true" 3D engine.
With the advent of consoles and computer systems able to handle several thousand polygons (the most basic element of 3D computer graphics) per second, and the use of specialized 3D graphics processing units, pseudo-3D became obsolete. But even today there are computer systems in production, such as cellphones, that are often not powerful enough to display true 3D graphics and therefore use pseudo-3D for that purpose. Many games from the 1980s pseudo-3D arcade era and 16-bit console era have been ported to these systems, giving manufacturers the possibility to earn revenue from games that are several decades old.
The resurgence of 2.5D, or visual analysis, in the natural and earth sciences has increased the role of computer systems in the creation of spatial information in mapping.[1] GVIS has made real the search for unknowns, real-time interaction with spatial data, and control over map display, and has paid particular attention to three-dimensional representations.[1] Efforts in GVIS have attempted to expand to higher dimensions and make them more visible; most efforts have focused on "tricking" vision into seeing three dimensions in a 2D plane.[1] This is much like 2.5D displays, where the surface of a three-dimensional object is represented but locations within the solid are distorted or not accessible.[1]
The reason for using pseudo-3D instead of "real" 3D computer graphics is that the system that has to simulate a 3D-looking graphic is not powerful enough to handle the calculation-intensive routines of 3D computer graphics, yet is capable of using tricks that modify 2D graphics such as bitmaps. One of these tricks is to stretch a bitmap more and more, making it larger with each step, so as to give the effect of an object coming closer and closer towards the player.
Even simple shading and scaling of an image can be considered pseudo-3D, as shading makes it look more realistic. If the light in a 2D game were 2D, it would only be visible on the outline, and because outlines are often dark, they would not be very clearly visible. Any visible shading across a surface instead indicates the use of pseudo-3D lighting, meaning the image uses pseudo-3D graphics. Changing the size of an image can cause it to appear to be moving closer or further away, which can be considered a simulation of the third dimension.
Dimensions are the variables of the data and can be mapped to specific locations in space; 2D data can be given 3D volume by adding a value to the x, y, or z plane. "Assigning height to 2D regions of a topographic map", associating every 2D location with a height/elevation value, creates a 2.5D projection; this is not considered a "true 3D representation", but it is used like a 3D visual representation to "simplify visual processing of imagery and the resulting spatial cognition".
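A toy sketch of this idea, with illustrative grid values, stores exactly one elevation per 2D location.

```python
# Minimal sketch of a 2.5D representation in the GIS sense: every (x, y) location
# carries exactly one elevation value, so the data describe a surface rather than
# a full 3D volume.

elevation = {          # metres above a datum for each grid cell (illustrative)
    (0, 0): 12.0, (1, 0): 15.5,
    (0, 1): 13.2, (1, 1): 18.9,
}

def surface_point(x, y):
    """Lift a 2D location into 3D by attaching its single stored height."""
    return (x, y, elevation[(x, y)])

print(surface_point(1, 1))   # -> (1, 1, 18.9)
```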