OpenGLBook.com

Preface: What is OpenGL?

On the most fundamental level, OpenGL is a software interface that allows a programmer to communicate with graphics hardware. Of course, there is much more to it than that, and you will be glad to know that this book explains the finer details of OpenGL. But before we get our hands dirty and start coding, you'll need to know a little about the history of computer graphics and OpenGL.

In the preface we'll explore the following topics:

* The inception of computers and computer graphics
* What OpenGL is and how it came to be
* How computer graphics work
* Hardware and software requirements for this book

Attention:

If you want to get into graphics programming without reading this lengthy history, you may skip straight to the "Requirements" section below, read it, and start the next chapter. I have to stress, however, that a thorough understanding of the history of computing, computer graphics, and OpenGL can be important for understanding future developments.

In The Beginning

Whether through writing, painting, or body language, imagery has always been an important player in relaying information and chronicling history. The requirement for visual feedback is so strong that it is hard to acknowledge the existence of something you cannot see. Bacteria, for instance, were purely speculative until the seventeenth century, when Antonie van Leeuwenhoek observed them through his handcrafted microscopes; once seen, they became an integral part of modern science.

Computer data is represented by nothing more than electrical pulses, which are also invisible to the naked eye. A method of displaying this data had to be invented, so early computer scientists got visual feedback from their machines through series of lamps mounted onto boards, or through long perforated paper tapes and so-called "punch cards."

As you can imagine, this information was far from readable, and much interpretation was required to convert it into a human-readable format. And even though computers were eventually equipped with electric typewriters, the output was far from optimal.

Display: Cathode Ray Tubes

Ferdinand Braun, inventor of the Cathode Ray Tube

In 1897, Ferdinand Braun invented the CRT (Cathode Ray Tube) in Germany as a type of vacuum tube whose purpose was to display an image on a screen. You may have seen or used them yourself in the form of the glass-tube televisions and computer monitors that were the norm until very recently.

CRTs were already being used for television and oscilloscope output, but nobody had thought of combining this technology with computers. The first time they were used to display computer output was in 1951 at MIT (the Massachusetts Institute of Technology), where the Whirlwind computer was developed as a United States Navy flight simulator. CRTs allowed the operators to instantly see the output of a computer program without having to interpret punch cards or rows of lamps, or sort through reams of printouts.

While the Whirlwind project itself wasn't very successful due to its high operating cost, it was an important step in the direction of computer graphics by introducing the CRT as a viable computer output device. The CRT became an important player in the development of computer graphics and remained the output device of choice for over 50 years, until it was replaced by newer flat-panel technologies.

The First Interactions

The light pen used with Sketchpad (Source: 1)

Though CRTs allowed computers to display their output, it was mostly text that was simply meant to read out the computer's current state. This remained true for some time, since no one thought of the computer as anything other than a pure computational device. It was not until 1961, when Ivan Sutherland developed a computer program called Sketchpad for his thesis at MIT, that the way we look at computers changed dramatically.

Sutherland's Sketchpad program allowed users to draw geometric shapes onto a CRT with a light pen in real time, something that was groundbreaking at the time and remains remarkable even many years after. It not only defined computer graphics, but also introduced the precursor of the GUI (pronounced "gooey," which stands for Graphical User Interface) and laid the foundations of what was to become the concept known as Object Oriented Programming. Sketchpad created a paradigm shift: computers were no longer simply number-crunching devices, but could also be used to display geometric shapes.

FYI: Real-Time Computer Graphics

Real-time computer graphics are generated on the fly, usually in response to user input from a mouse, keyboard, or other input device. Real-time graphics appear in applications such as video games and design programs.

In 1968, Ivan Sutherland and Bob Sproull engineered another technological feat, namely "The Sword of Damocles," the forerunner of what we now call virtual reality. This system displayed simple three-dimensional wireframe models to the user through a headset, suspended from the ceiling because of its weight. This may have been one of the first times (if not the first) that a form of 3D graphics was generated by a computer.

Smaller, Faster, Cheaper

Of course, technology didn't just jump from punch cards to interactive graphics; computers gradually evolved from massive machines to the small devices that you use every day.

ENIAC, a famous first-generation computer (Source: 2)

From the 1940s to the mid-1950s, computers used vacuum tubes for processing and would take up entire rooms. A vacuum tube is a device that can modify an electronic signal in some way or another, such as switching, a function that is imperative to computing. The components required to assemble a computer were big and ran hot to the touch. None of the machines built during this era were the same, and thus their programs were not compatible with any other machine. This era is usually referred to as the first generation of computers and represents the first step in modern computer science. These are the machines mentioned in the first section that used punch cards and lamps as output devices, so they contributed little to the development of computer graphics.

Transistors started to replace vacuum tubes during the mid-1950s, creating the smaller, faster, cheaper, and more energy-efficient computers of the second generation. Transistors not only allowed for the creation of smaller computers due to their small size, but ushered in a whole new generation of consumer electronics as well. Radios, for example, could now be powered by batteries and carried around, whereas before they were heavy, stationary boxes.

But transistors themselves were not the Holy Grail of computing, and in the mid-1960s, a new technology called the Integrated Circuit brought forth the third generation of computers. Integrated Circuits miniaturized functions that would normally have been performed by a series of individual transistors onto a single chip. During the third generation, many computers were being equipped with devices such as keyboards and monitors, and with a new type of software called the Operating System, which allowed the computer to run multiple programs. An important Operating System from this era was UNIX, which would influence many (if not all) later developments in Operating Systems.

The Intel 4004 Microprocessor -- the little chip that started it all (Source: 3)

In 1971, a major breakthrough in the field of computing occurred with the invention of the microprocessor by the Intel Corporation, ushering in the fourth generation of computers. Whereas CPUs (Central Processing Units) had normally been boards with many Integrated Circuits soldered onto them, Intel's 4004 microprocessor contained all of this functionality on a single chip. Because of a cheaper production process, the computer slowly transitioned from a specialized device used by large companies and governments to something much more accessible to the masses. We are still in the fourth generation of computers, since we have not yet moved away from microprocessors.

Personal Computing

The Apple II Plus (1979) showing color display capabilities (Source: 4)

The first Personal Computers started to appear in the mid-to-late 1970s, but were regarded as enthusiast machines for hobbyists. This changed somewhat with the release of the Apple II by Apple Computer in 1977, and the PET by Commodore International. These machines popularized the concept of computers for the home, but did not have much to offer in terms of computer graphics.

This changed somewhat in the 1980s when technologies such as the GUI were introduced to the personal computing market. The first dedicated graphics add-on cards also started to appear during this era; notably, the CGA (Color Graphics Adapter) by IBM was the first color graphics card for the IBM PC platform. It paved the way for future developments by standardizing a method of drawing computer graphics, but didn't offer much in terms of graphics capabilities.

During the 1970s and 1980s, most video games ran on specialized systems, movies used computer animation only sparingly, and real-time 3D graphics were for visualization purposes only, since there was no consumer hardware fast enough. These years were known for their many firsts on consumer hardware, but it wasn't until the late 1980s to early 1990s, when computer games took a strong hold on the PC platform, that a real push for better-looking and better-performing real-time graphics began.

One of the computer games that pushed what was possible on the hardware of the day was Wolfenstein 3D, a first-person shooter released in 1992 by id Software. While Wolfenstein wasn't truly 3D, it defined the standard for future 3D computer games. Only a year later, id Software released Doom which, while still not fully 3D, came much closer: for the first time, the player could explore environments with staircases and elevators, no longer stuck at a single elevation. Many games followed that imitated the look and feel of Doom and were aptly dubbed "Doom clones." Doom used a software renderer to draw its real-time graphics to the screen, as did all of the other games of the early 1990s, but this was about to change.

OpenGL: The First Decade

SGI Logo

Silicon Graphics (commonly referred to as SGI) was a company founded in 1981 that specialized in 3D computer graphics and developed software and hardware specifically for this purpose. One software library that SGI developed was IRIS GL (Integrated Raster Imaging System Graphics Library), used for generating 2D and 3D graphics on SGI's high-performance workstations. This library was about to evolve into one of the most important computer graphics developments of the 1990s.

In the early 1990s, SGI was the market leader in 3D graphics workstations because of their high-performance hardware and easy-to-use software. IRIS GL was the de facto industry standard 3D graphics library, overshadowing all other developments and attempts to standardize a 3D graphics interface. But despite its popularity, IRIS GL had one major problem: it was a proprietary system fused to SGI's own platforms, and competitors were closing in on SGI's advantage with their own APIs (Application Programming Interfaces).

OpenGL Logo

In a bold move, SGI cleaned up IRIS GL, removed all functionality that did not relate to computer graphics, and released it to the public in 1992 as OpenGL (Open Graphics Library), a cross-platform, standardized API for real-time computer graphics.

Software vendors would have to provide their own implementations of the OpenGL standard on their platforms, and hardware vendors would have to provide programs called "device drivers" that allowed OpenGL to talk to the underlying graphics hardware. SGI already provided these to its customers, together with a few high-level APIs, while other vendors caught up with the new and easy-to-use API.

Flexibility

SGI did not provide any actual source code, merely a specification of how the API should work. This abstraction allowed hardware and software vendors great freedom in how they chose to implement OpenGL, and this level of abstraction is still present today. Because of it, OpenGL is supported across many platforms and devices; in fact, you will be hard-pressed to find a modern platform without at least some level of OpenGL support.

But perhaps the greatest advantage that OpenGL provides to implementers is its support for extensions. If the OpenGL specification does not provide support for specific functionality, the hardware or software vendor may decide to add this functionality themselves through the use of extensions. Many vendors choose to do this, and their extensions can be distinguished by their prefixes, e.g. NV_ for NVIDIA, APPLE_ for Apple, and so on. Extensions can provide powerful functionality, but are usually specific to the vendor's implementation of OpenGL.

You can then call the functionality provided by these extensions by loading them in your program through an extension-loading mechanism that retrieves a function pointer. This loading mechanism is not standardized, however, so sadly each platform has its own specific extension-loading functions. This limitation is most apparent on the Microsoft Windows platform, where the OpenGL header files have not been updated since OpenGL version 1.1, even in the latest Windows development kits. There will be more details about why this is so later in this chapter.
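
To make this concrete, here is a minimal sketch of what manual extension loading looks like on Windows. It assumes a current OpenGL context; the function-pointer typedef and the helper name LoadGenBuffers are our own for illustration. The GLEW library, introduced later in this chapter, wraps exactly this kind of boilerplate for us:

    /* The gl.h shipped with Windows stops at OpenGL 1.1, so a newer entry
       point such as glGenBuffers must be declared and loaded by hand. */
    #include <windows.h>
    #include <GL/gl.h>

    typedef void (APIENTRY *PFNGLGENBUFFERSPROC)(GLsizei n, GLuint *buffers);
    static PFNGLGENBUFFERSPROC pglGenBuffers = NULL;

    static int LoadGenBuffers(void)
    {
        /* wglGetProcAddress only works while a context is current */
        pglGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
        return pglGenBuffers != NULL;
    }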

An Open Standard

The name "OpenGL" was not just chosen because it sounded like a fine buzzword,it also contains some actual meaning. Since OpenGL is an evolvingspecification, someone has to decide what goes in it. So in 1992, the ARB(OpenGL Architecture Review Board) was founded, which comprised of several highprofile software and hardware vendors who collectively decided the future ofthe OpenGL standard through a voting system. Besides determining what newfeatures went into the OpenGL specification, it also decided which extensionswould be promoted to become core features of the next OpenGL release.

Although anyone was free to develop an implementation of OpenGL, for it to be recognized as a true OpenGL implementation, the ARB had to approve it through conformance testing: rigorous procedures that verify an implementer's claims of compatibility with a specific OpenGL version.

FYI: Sample Conformance Tests

To see some of the conformance testing output, visit Mesa3D.org and click on the Conformance Testing header.

OpenGL quickly became the industry-leading real-time graphics API, as it was basically the only one available on multiple platforms.

OpenGL on Windows

OpenGL was already being implemented on UNIX-based workstations when Microsoft entered the market with their workstation operating system, Windows NT, in 1993. Windows NT was released as a direct competitor to UNIX, with networking and 32-bit hardware support (the NT acronym stands for "New Technology"). Windows NT introduced features that are still being used today, such as the Win32 API for creating Windows applications. But having no 3D graphics library native to their system, Microsoft pledged to add support for OpenGL to Windows NT.

Windows NT 3.5 was the first version of Windows to support OpenGL -- barely.

Microsoft finally got around to implementing OpenGL in Windows NT 3.5, released in 1994, but only to the point that they could claim compatibility: they implemented the sample implementation provided by SGI at the time. This sample implementation was meant to serve only as a demonstration of how one could implement OpenGL and as a guideline for vendors. Needless to say, the result was dreadfully slow, since it was not optimized and graphics accelerators for the PC were virtually nonexistent. In fact, the performance issue was so apparent that a Microsoft knowledge base article (KB121282) warned that NT 3.5's OpenGL screensavers could slow a machine down because they took significant time from the computer's CPU.

DirectX

The old DirectX Logo.

Seeing a market opportunity in video games, Microsoft sought their own 3D graphics API for Windows to entice game developers to leave DOS behind (Microsoft's first operating system) and develop games purely for Windows. Their first attempt was WinG, which simply passed commands to the underlying GDI (Graphics Device Interface) and offered no 3D functionality. To gain 3D capabilities, Microsoft acquired a company named RenderMorphics in 1995, who produced a 3D graphics API called Reality Lab. This API was renamed and shipped as Direct3D in an SDK (Software Development Kit) called DirectX, which bundled a few other game-development-specific APIs as well, such as DirectDraw, DirectSound, and DirectInput.

The first versions of Direct3D were uncomfortable to work with, and developers were slow to adopt the API. This caused Microsoft to keep supporting OpenGL while simultaneously putting much effort into making Direct3D a competitive API. The OpenGL 1.1 specification was implemented in Windows 95 and Windows NT 4.0 and came with a much-needed performance boost, although it was the last time Microsoft's implementation of OpenGL would be updated; from then on, the company favored its own API.

The Beginning of the "API Wars"

John Carmack of id Software popularized OpenGL in video games. (Source: 5)

While Microsoft had always insisted that OpenGL was best used for "professional graphics," meaning CAD (Computer Aided Design) on the workstation, it was starting to see adoption in video games, an industry that Microsoft sought to dominate on the Windows platform with DirectX. OpenGL's biggest breakthrough came when the influential developer John Carmack of id Software ported his famous video game Quake to the OpenGL API on Windows and showed developers how easy it was to do so.

In December of 1996, Carmack released a document that outlined his grievances with the Direct3D API. He illustrated the differences between the APIs by comparing the code required by each to draw a triangle to the screen; OpenGL required only four lines of code in his samples, while Direct3D required a plethora of commands and assignments.

Carmack's blunt way of explaining things was so damaging to Direct3D's reputation that Direct3D developer Alex St. John posted a follow-up in February of 1997, defending his API and, strangely enough for Microsoft, admitting its flaws. He explained that Direct3D was designed with direct access to hardware in mind rather than software, and that the resulting interface may not have been pretty, but it got the job done. And once again, Microsoft pushed the point that OpenGL was a CAD library and wouldn't be supported on consumer-grade hardware anytime soon.

This response rattled SGI's cage, and in June of 1997, they came out with their own response to St. John's critique. This lengthy document outlined "some of the most notable deficiencies of the design and current implementation of Direct3D." It noted the differences between the two APIs in technical terms rather than marketing ones, and elaborated on Carmack's mention of ease of use with more code samples.

FYI: The Back-and-Forth
  1. John Carmack's .plan file on OpenGL.
  2. Alex St. John's write-up on Direct3D, OpenGL, and John Carmack.
  3. The SGI document mentioned above is now only available through Archive.org.

For even more reading on this, the Microsoft culture at the time, and Alex St. John, pick up the excellent book Renegades of the Empire, or check out his blog at AlexStJohn.com, where he occasionally talks about his time at Microsoft as well as the still-ongoing OpenGL vs. Direct3D debate.

Direct3D became more usable with version 5.0, which removed some of theuncomfortable features from the API. At this point, both APIs were quiteuser-friendly and similar in feature set, but this status quo was about tochange.

Driver Debacle

With OpenGL and Direct3D at a stalemate on Windows NT, Microsoft needed an edge. OpenGL drivers were implemented in Windows NT by using a Mini-Client Driver (MCD), a low-performing compromise between hardware and software, but the easiest solution for creating drivers. The MCD allowed vendors to pick and choose the pieces that they wanted to accelerate on their hardware, while the rest would run on the provided software implementation (or vice versa). Microsoft obtained the edge they needed by not allowing the licensing of MCDs on Windows 95, thus effectively limiting OpenGL to the software implementation provided by Microsoft.

This decision was a massive blow to OpenGL, which was trying to find a way into the consumer market, not to mention to the hardware vendors who had been implementing these drivers for months. Thankfully, SGI provided a solution that would bring hardware drivers to Windows 95: the Installable Client Driver (ICD). This type of implementation was in fact significantly faster than the MCD driver model, so it proved to be quite a blessing in disguise. Hardware vendors jumped on the opportunity and quickly began supplying drivers. Not long after, game developers started implementing OpenGL in their games, proving it was once again a viable alternative to Direct3D.

Hardware Evolution

In the late 1990s, OpenGL established itself as an industry standard for 3D computer graphics, and not just for CAD programs, where it was the only contender. PC video games such as Quake 2, Unreal, and Half-Life took full advantage of OpenGL to show off their full potential and were widely popular. Around this time, the first consumer-grade dedicated 3D graphics hardware started to appear, changing the video game industry forever.

Old 3Dfx Logo.

One of the first 3D accelerators was the Voodoo Graphics by 3Dfx Interactive, a high-performance add-on card that set the standard as soon as it hit the market. While there were other add-on cards, such as the ATI 3D Rage and the S3 ViRGE, the 3Dfx card blew them all out of the water in both performance and features. On top of that, 3Dfx provided its own 3D graphics API called Glide, which had direct access to the underlying graphics hardware. At the time, Glide was the fastest API in town, but because it was vendor-specific, it was made obsolete by competing APIs only a few years later. Nevertheless, Glide made its impact on the industry, causing the other APIs to play catch-up for quite a while.

Old NVIDIA Logo.

NVIDIA soon caught up, in 1999, with their GeForce 256 add-on card, which they termed a GPU (Graphics Processing Unit) and which supported a brand new technology called Transform & Lighting (commonly referred to as T&L). T&L moved vertex transformation and lighting calculations from the computer's CPU to the GPU. The main advantage of a GPU is that it performs floating-point operations very quickly, since its hardware is dedicated to that task, while a CPU specializes in integer and more general-purpose operations. 3Dfx never implemented T&L, which eventually contributed to their demise as more software functionality moved over to the GPU.

After 3Dfx went bankrupt, NVIDIA acquired much of its intellectual property (including the well-known SLI technology) but did not continue the Voodoo product line, nor did they support any of 3Dfx's old products. By the year 2000, the only competitors remaining in the GPU market were NVIDIA, with the GeForce 2, and ATI, with their Radeon 7000 series of GPUs. These two vendors only offered support for OpenGL and Direct3D, clearing the playing field for these APIs to go head to head.

FYI: Software vs. Hardware

When we say something is software functionality, it means that it is executed on the CPU instead of the GPU. When we say something is hardware functionality, it means that the feature is executed on dedicated hardware. For example, software rendering is done solely through the CPU and hardware rendering is done solely on the GPU.

Paradigm Shifts

During the early 2000s, GPU performance grew exponentially as more software features were moved to the GPU. The CPU became obsolete for rendering real-time 3D graphics, since it could not keep up with GPU developments. In fact, the prevailing method of rendering 3D graphics saw the CPU as such a major bottleneck that new methods had to be invented to circumvent its use.

Buffers

Up until this point, to get things to render on the screen, programmers issued lists of commands from their programs that would be interpreted by the GPU; this is called the immediate mode. This methodology performed fine for smaller data sets, but with larger data sets, performance was limited by the CPU, since every function call originated from the program itself.
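
As an illustration, and using the legacy API that this book otherwise avoids, here is a sketch of what drawing a single triangle looks like in immediate mode. Note that every attribute of every vertex is a separate function call, issued by the CPU every single frame:

    /* Immediate mode: one call per attribute, per vertex, per frame.
       Fine for one triangle, crippling for millions of them. */
    #include <GL/gl.h>

    void DrawTriangleImmediate(void)
    {
        glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
        glEnd();
    }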

The new method came in the form of buffer objects. Buffers had been around for a while in the form of display lists and vertex arrays, but they each had their drawbacks. Display lists still used the immediate mode, and vertex arrays were stored in the system's memory, so they had to be transferred to the GPU on every single call.

Instead, the new buffer objects would be stored in the GPU's memory after initialization and stay there until no longer needed. In OpenGL, these objects are called Vertex Buffer Objects (VBOs), and in Direct3D, they are called Vertex Buffers. A future chapter will introduce you to VBOs and explain exactly why they are so fast.
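
As a preview, here is a minimal sketch of creating and filling a VBO. It assumes a current OpenGL context and GLEW (introduced later in this chapter) for loading the entry points; the function name CreateTriangleVbo is our own. The vertex data is uploaded once and lives in GPU memory from then on:

    /* Upload a triangle's vertices into a buffer object in GPU memory. */
    #include <GL/glew.h>

    GLuint CreateTriangleVbo(void)
    {
        const GLfloat Vertices[] = {
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
             0.0f,  1.0f, 0.0f
        };
        GLuint Vbo = 0;

        glGenBuffers(1, &Vbo);                /* reserve a buffer name     */
        glBindBuffer(GL_ARRAY_BUFFER, Vbo);   /* make it the active buffer */
        glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices,
                     GL_STATIC_DRAW);         /* copy the data to the GPU  */
        return Vbo;
    }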

FYI: Buffers

In computer science, a buffer is a place in memory where temporary data is stored. When we are done using this data, the buffer is deleted and the memory is ready for reuse.

Shaders

In the year 2000, Microsoft released Direct3D 8.0, which supported a new feature called shaders. Shaders are basically nothing more than little programs that run directly on the GPU, thus leveraging even more of the GPU's power and moving more functionality away from the CPU. When Direct3D 8.0 was released, two types of shaders were announced, namely vertex shaders and pixel shaders.

A vertex shader is a GPU program that is executed once per vertex it is assigned to, and a pixel shader is a GPU program that is executed once per pixel. Shaders allowed for greater programmability and performance by eliminating the CPU from many tasks, but they were very hard to program due to their syntax, which resembled the Assembly programming language of the CPU.

Microsoft recognised this shortcoming, and in 2003, a major breakthrough in shaders came in the form of the High-Level Shader Language (HLSL), released with Direct3D 9.0. This new language allowed shaders to be written in a high-level language whose syntax was based on C. At this point, shaders became much more viable to use and adoption became widespread. You'll learn much more about vertex and pixel shaders in future chapters, where we'll go in-depth into how all of this is achieved and how you can use shaders in your own programs.

OpenGL Stagnates

The above section makes no mention of OpenGL for a good reason: OpenGL didn't support shaders at the time. OpenGL did not officially support shaders until 2004, with the release of OpenGL 2.0 and the simultaneous release of the OpenGL Shading Language (GLSL). Even though extensions for shaders were widely available before 2004, they were not part of the official specification and took several years to implement properly.
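
To give you an early taste, here is roughly what a minimal GLSL shader pair looks like, embedded as C strings the way many samples supply them. The names in_Position and out_Color are our own; compiling and using shaders is covered in a future chapter:

    /* A pass-through vertex shader: every vertex is forwarded unchanged. */
    static const char* VertexShaderSource =
        "#version 400\n"
        "in vec4 in_Position;\n"
        "void main(void)\n"
        "{\n"
        "    gl_Position = in_Position;\n"
        "}\n";

    /* A fragment shader that colors every pixel opaque red. */
    static const char* FragmentShaderSource =
        "#version 400\n"
        "out vec4 out_Color;\n"
        "void main(void)\n"
        "{\n"
        "    out_Color = vec4(1.0, 0.0, 0.0, 1.0);\n"
        "}\n";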

OpenGL had fallen behind Direct3D dramatically in terms of core features. Just as Direct3D had played catch-up in the late '90s, OpenGL now had to catch up to Direct3D, which didn't happen. From 2004 to 2006, Direct3D 9.0 dominated the market, and only a few games were released with OpenGL support. Support for Direct3D increased even more when the Xbox 360, which ran Direct3D 9.0, was released in 2005. In the meantime, there was no news whatsoever from the ARB, and for a while it seemed like OpenGL was truly dead.

In 2006, OpenGL 2.1 was released as a minor increment to the original 2.0 specification and brought only a handful of new features. To add insult to injury, Microsoft released Direct3D 10.0 alongside their new Windows Vista operating system, which included a major API overhaul and many new features. Hardware started to move in a new direction, away from the immediate-mode, fixed-function methodology and toward programmability, support that OpenGL lacked.

Meanwhile, the OpenGL developer community was getting restless and demanded an answer from the ARB or SGI, but what they got was something entirely different.

The New OpenGL

At SIGGRAPH 2006 it was announced that OpenGL would from then on be managed by the Khronos Group instead of SGI; SGI still owned OpenGL and all the associated copyrights, but would no longer manage it. The Khronos Group is a consortium of hardware and software vendors with a vested interest in OpenGL that focuses on the creation and maintenance of open standard APIs; its most notable standard before acquiring OpenGL was the COLLADA file format for 3D content. Finally, there was some news from the OpenGL front, and for the first time in two years, there was rumour of a brand new version of the OpenGL API that would bring some major changes.

Longs Peak and Mt. Evans

Two new working versions of OpenGL were announced under the temporary codenames "Longs Peak" and "Mt. Evans," after mountains in Colorado. Longs Peak would be released first, in the summer of 2007, and Mt. Evans a few months later, in October of that year.

These revisions promised a brand new API that could compete with the Direct3D 10 API; much like Direct3D 10 had done, Mt. Evans would eradicate immediate mode rendering and rely solely on buffers and shaders. This API rewrite was a grand undertaking and required several Technical Sub-Groups, or TSGs, each focused on its own specialized area of the OpenGL specification.

One of these was the Object Model TSG, which dealt with how buffers and other types of objects would be represented in the new API. The proposed Object Model described a wonderful new method for creating objects through only a few function calls. Above all else, the methods used would be consistent for all types of objects through the use of templates. This meant that there would be no more discrepancies between vendors, who would otherwise each provide their object creation functionality in a myriad of ways.

Longs Peak would be an API compatible with the hardware of the time and preserve backwards compatibility with older versions of OpenGL, while Mt. Evans would eliminate backwards compatibility and take a future-forward stance. This, together with the Object Model, became the staple of the proposed API, drumming up much anticipation in the OpenGL community.

But the summer of 2007 came and went without word from Khronos, and on October 30th, word came out that the new specification was delayed. The community was a bit disgruntled, but the overall consensus was that the new specification was worth the wait, and another year went by.

OpenGL 3.0

It had been two years since the minor release of OpenGL 2.1, and four years since the last major release, when OpenGL 3.0 arrived in August of 2008. Reading the specification, it didn't take long to notice that this was not Longs Peak. In fact, it looked like not much had changed at all: the immediate mode was still there, the proposed object model was missing, and there were no plans to include it in any future release of OpenGL. Some new features were introduced, together with something called the deprecation model.

The deprecation model tagged all of the immediate mode functionality as deprecated in favour of more modern methods. However, there were no plans to actually remove any of the deprecated functionality, leaving the OpenGL 3.0 specification a fully backwards-compatible API carrying all of the crippling features of the past.

The community was outraged and protested very vocally on the OpenGL community message boards and elsewhere online. Unfounded accusations were made that OpenGL remained backward compatible because Khronos didn't want to lose their CAD customers, who still used the immediate mode and refused to move on. Many Windows developers started to leave OpenGL behind in favour of Direct3D, including several of the accused CAD software developers. If the future looked bleak for OpenGL before, it certainly seemed as if OpenGL had lost the API wars once and for all with this release.

After the Shock

But after the initial disappointment had passed, the new specification proved to have some qualities that Direct3D didn't possess. For instance, OpenGL 3.0 contained many of Direct3D 10's features but was able to access them on Windows XP, whereas Direct3D 10 required Windows Vista to function because of a new driver model.

About a year later, in March of 2009, OpenGL 3.1 was released, which finally removed all of the immediate mode functionality that 3.0 had marked as deprecated from the OpenGL specification, thus bringing it one step closer to the API promised in Longs Peak and Mt. Evans. With this fast release, OpenGL was finally back on the right path, and only a few months later, OpenGL 3.2 was released, bringing the API up to par with Direct3D 10 by including Geometry Shaders. You'll read more about geometry shaders in a future chapter.

Deprecation, Core, and Compatibility

The OpenGL deprecation model can prove to be a bit confusing, so in this section I'll try to explain the associated terms once and for all. OpenGL 3.0 introduced a feature called deprecation that "marked" old and unwanted OpenGL functionality and warned that these features may be removed in future specifications; basically, using deprecated features in your programs is not a good idea moving forward.

When a developer wants to use OpenGL in their programs, they need to create a so-called context, which is basically nothing more than an object that allows developers to pass commands to an OpenGL device. In the past, these contexts were all created in the same manner, regardless of the implemented OpenGL version.

This meant that when you created a device that used OpenGL 1.5 features and the driver returned an OpenGL 2.0 device, it was not harmful, since all contexts were fully backwards compatible. OpenGL 3.0 introduced a new method of creating contexts that asks for the following parameters at context creation: a major version number, a minor version number, and a set of context attributes.

In the case of an OpenGL 3.0 context, the corresponding values would be 3, 0, and some combination of attributes. This new functionality guarantees that the device returned is either the requested OpenGL device or nothing at all, meaning the version is unsupported. The attributes that can be passed make the selection even more granular: a Core Profile or a Compatibility Profile, optionally combined with the Forward Compatible and Debug flags described below.

The difference between the two profiles is that the Core Profile does not include any of the features that were removed in previous versions, while the Compatibility Profile does include them. An OpenGL implementation is always guaranteed to contain the Core Profile of the specification, but not always a Compatibility Profile. Moving forward with a Core Profile is the most logical thing to do, and the intention of this type of context creation.

If you set the Forward Compatible flag, the context that is returned will not contain any of the features that were deprecated in the version that you requested, thus making it compatible with future versions that may have removed these features.

If you set the Debug flag, a debug context will be returned that includes additional checking, validation, and other functionality that can be useful during the development cycle.

The above flags can all be combined, with the exception of Core and Compatibility, since only one of these can be returned. Combining flags is done using the bitwise OR operator, the pipe symbol (|) in C.
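
Here is a minimal sketch of how these parameters can be requested through FreeGLUT (version 2.6.0 or higher, introduced later in this chapter); the window title is just a placeholder, and the exact calls will be covered again when we create our first context:

    /* Request an OpenGL 4.0 context with combined attribute flags. */
    #include <GL/freeglut.h>

    int main(int argc, char* argv[])
    {
        glutInit(&argc, argv);
        glutInitContextVersion(4, 0);                    /* major, minor   */
        glutInitContextFlags(GLUT_FORWARD_COMPATIBLE | GLUT_DEBUG);
        glutInitContextProfile(GLUT_CORE_PROFILE);       /* Core, not
                                                            Compatibility  */
        glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
        glutCreateWindow("Context Creation Sketch");
        /* glutMainLoop() and a display callback would follow here. */
        return 0;
    }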

If these terms are still a bit fuzzy to you, don't worry, we'll go over them again in the appendices, where we set up OpenGL contexts from scratch on specific platforms. For now, remember that there are two types of OpenGL profiles: Core, the more modern profile, and Compatibility, the profile that is compatible with older OpenGL functionality.

OpenGL 4.0

A year after the release of OpenGL 3.2, OpenGL 4.0 was released as an API for the latest generation of GPUs, similar to Direct3D 11. Simultaneously, OpenGL 3.3 was released, implementing as many features from OpenGL 4.0 as possible while remaining compatible with previous-generation hardware.

An important new feature in OpenGL 4.0 is called Tessellation, which allows for fine-grained control of surfaces and automated levels of detail in your scenes. We will explore what Tessellation is and how to use it in a future chapter.

This book uses the OpenGL 4.0 specification with the Core Profile enabled, which means that it is not compatible with any previous version of the OpenGL API. We are also not going to use any deprecated features, in order to stay as future-compatible as possible. This may seem a bit daunting at first, but rest assured that it's not as difficult to learn as it may seem, and not learning old features will be beneficial in the long run.

If you already know some immediate mode OpenGL, please be aware that we are not going to cover this type of OpenGL. You will not find commands such as glBegin, glEnd, glVertex3f, and glColor3f in this book (besides these references), since they are not present in the OpenGL 4.0 Core Profile. Forget everything that you've learned about immediate mode OpenGL up to this point, and that the immediate mode API ever existed, since it is never coming back.

It's a great time to start learning OpenGL, since there is a major movement toward porting video games to platforms other than Microsoft Windows, and the only way you'll get real-time computer graphics on platforms other than Windows is through OpenGL. For instance, Half-Life developer Valve has ported many of their popular games to the Apple Macintosh, using OpenGL as their graphics library.

In addition, modern smartphones such as the iPhone and Android-based phones all use OpenGL ES for interactive 3D graphics, an API for embedded systems based on, and very similar to, OpenGL. This means that your code could potentially be portable enough to run on PCs, Macs, and consoles, as well as on various mobile devices.

OpenGL ES itself has spawned yet another API called WebGL, a cross-browser, cross-platform 3D graphics API for the web browser that is gaining more traction by the day. This library has the potential to move the web, as well as multi-user applications, to a whole new level.

All in all, OpenGL is far from dead and is thriving as a full-featured, modern 3D graphics API.

The Software Pipeline

In rendering real-time computer graphics, the software pipeline exists to describe what we'd like to see on the screen. For example, if we'd like to display a green square, computer software describes the dimensions, the color, and the position on the screen at which to draw the square.

The software pipeline also provides access to functionality that draws the geometry onto the screen. It is worth noting that the software pipeline does not actually do any drawing or transformations, since on modern systems this functionality is entirely implemented by the hardware.

The Software Pipeline.

The software pipeline consists of several different layers, each with its own very specific purposes. Each of these layers may contain a myriad of functionality, but for the sake of brevity and clarity, we'll only discuss the high-level functions of each layer.

The first is the Application layer: your program, the one that invokes drawing commands. The application serves as the controller of the overall process and oversees all of the user-level operations, such as creating windows, threads, memory allocations, and complex user data types, and making calls to external libraries such as OpenGL or Direct3D through their respective interfaces.

The next layer is the Abstraction layer, which contains the OpenGL or Direct3D API implementations. It is important to make the distinction between the API in the Application layer and the implementations of OpenGL and Direct3D in the Abstraction layer. To put it in C terms: you can think of the Application layer as the header file containing only definitions, while the Abstraction layer is the source file containing the actual functionality. The Abstraction layer serves as a dispatch to the next layer by implementing hardware-level functionality in a usable and standardized format.

The Abstraction layer passes its commands to the Device Driver, a software communication layer to the hardware. This layer is entirely invisible to the developer, since it cannot be interacted with through your program. Like many invisible things (remember the bacteria), the device driver is one of the most important parts of the software pipeline, since it connects all of the individual pieces together. Because of the amount of specialized functionality that the Device Driver encompasses, it can be quite large in file size.

The device driver interprets the commands passed to it by the Abstraction layer and relays them to the underlying device in a format that the hardware can understand and easily process.

In essence, the software pipeline serves as a relay from your program to the dedicated hardware. While very important, the only part of the software pipeline that you will actually use in your programs is the Application layer, which exposes the APIs to you.

The hardware pipeline will be explained in depth in another chapter, when we actually set up an OpenGL context and a window for rendering purposes.

Requirements

Now that we're ready to start, we have to make sure that you're prepared for programming with OpenGL 4.0. Naturally, you need basic knowledge of the C programming language, and you must know how to link to a library with your compiler.

If you don't know the C programming language, there are many books that will teach it to you in an easy-to-follow manner; in fact, C is not a very difficult language to learn. I don't expect you to master it in order to use this book, but you will need to understand the basics, such as pointers, data structures, and functions. So, once you've learned some C, or while you're learning it, you are very welcome to return to the book and resume from this point.

If you don't know how to link to a library, you can consult your compiler's documentation. Linking a library is usually nothing more than setting a few command line parameters or options in a GUI.

Last but not least, some basic math skills are required, since we will be handling matrices, vectors, and geometry. The samples in the book will not be excessively heavy on math, but eventually, some math will be required.

System Requirements

To be able to run the examples in this book, your GPU must support OpenGL 4.0 in its entirety.

If you're not sure about your GPU, refer to the manufacturer's website for more information and the latest device drivers.

Finally, if your GPU does not support OpenGL 4.0, you will not be able to run the samples provided in this book, at which point you must upgrade your hardware in order to continue. However, many of the concepts discussed are available in earlier versions of OpenGL; you just will not be able to copy the code samples without rewriting them for your supported version. Also, please make sure that you have the latest device drivers for your graphics hardware, since they usually contain many bug fixes and performance improvements.

Software Requirements

I have tried to mention only easy-to-acquire, free, and Open Source software in the following sections so that you can get started with OpenGL quickly, anywhere you have an internet connection. The main thought behind OpenGL is portability, so the code mentioned in the book will be portable across multiple platforms as well.

What this all means for you is that if you ever decide to use another operating system, or simply wish to develop OpenGL programs for other operating systems, the samples in this book will still work. Platform-specific samples can be found in Appendix A, but I cannot recommend them if you are not familiar with OpenGL, C, and your operating system's API.

The most important piece of software that you will need for this book is a C compiler, which will transform the C code from its human-readable text format into a binary format that can be interpreted and executed by your machine.

If you use Microsoft Windows, you can get a free version of Visual C++ from Microsoft's website, called Visual C++ Express. This program contains a modern, full-featured C and C++ development environment, based on the more complete Visual Studio: http://www.microsoft.com/express/windows/

If you use Linux, chances are that you already have a C compiler on your system: GCC (the GNU Compiler Collection). This is only a compiler, meaning that you will compile your programs through a command line interface instead of a development environment. http://gcc.gnu.org/
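
As a sketch, assuming the development packages for OpenGL, FreeGLUT, and GLEW (all introduced below) are installed, a typical GCC invocation on Linux might look like this, where chapter1.c is just a placeholder file name:

    gcc -o chapter1 chapter1.c -lGL -lGLEW -lglut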

Another option for compiling C and C++ code on Linux is the Intel C++ compiler, which is available for free for non-commercial usage from the Intel site: http://software.intel.com/en-us/intel-compilers/

If you're on a Macintosh, you can use Xcode, which is a free download from the App Store. Like Visual C++, Xcode is a full-fledged development environment, but focused on Objective-C instead of C and C++. https://itunes.apple.com/us/app/xcode/id497799835

If you're on any other platform, please consult your operating system's vendor for details on where to acquire a C compiler, or use a search engine to find out for yourself.

IDEs (Integrated Development Environments) are a great way to organize your projects and can help you code faster. A good development environment for many operating systems is Eclipse, which supports C and C++ as well as several other programming languages. You can get the Eclipse development environment for free at eclipse.org.

Of course, you don't have to use an IDE; there are many text editors out there that are great for editing C code, such as Vim, Emacs, Notepad++, or Sublime Text.

Besides a compiler, make sure that you have support for OpenGL by referring to the compiler's documentation, or by checking the include directories for the header file gl.h, and the library directories for the OpenGL library, usually named opengl32.lib or libGL.so, depending on your platform. If you can't find these files, or if you're just not sure, refer to your compiler documentation or vendor website for support.

Other than a C compiler and OpenGL support, a few additional libraries are required for the examples in this book.

FreeGLUT

Since OpenGL is merely a graphics library, window and context creation must be handled by an external library, usually provided by the operating system. But since you could be using a different operating system than I am, a cross-platform library must be used, and FreeGLUT is an Open Source library that does exactly that.

Modeled after the long-abandoned but still popular GLUT library (OpenGL Utility Toolkit), FreeGLUT provides a modern Open Source alternative that is easy to use, cross-platform compatible, and suitable for creating demonstrative programs such as the ones in this book.

You can obtain a copy of FreeGLUT at freeglut.sf.net; please make sure that you get version 2.6.0 or higher in order to be able to create an OpenGL 4.0 context. FreeGLUT is licensed under the X-Consortium license, meaning that you can use it in any program, even a proprietary one.

GLEW

Loading extensions can be quite a platform-dependent hassle, so the next library you will need is the GLEW library (OpenGL Extension Wrangler), which makes it a breeze to use OpenGL extensions in your programs. If you worry that we're skipping an important part of OpenGL by using this third-party library, Appendix A explores extension loading in detail for several platforms.

You can get a copy of the GLEW library at glew.sf.net for free, as it is also Open Source and licensed under several permissive licenses that allow you to use GLEW in any code base, similar to the license that FreeGLUT uses. Please make sure to get version 1.5.4 or higher for OpenGL 4.0 support.

Setting up Guides

Setting up OpenGL, GLEW, and FreeGLUT in Visual C++

Other Libraries

Any other library dependencies will be listed at the beginning of each chapter.

Conclusion

Now that we've covered most of the background on computer graphics and OpenGL, and know the requirements for continuing, we're ready to get our hands dirty and set up an OpenGL rendering context in the next chapter.

Media Sources and Attribution

1. Source, License: Creative Commons Attribution-Share Alike 3.0 Unported

2. Source, License: Creative Commons Attribution-ShareAlike 2.0 Generic

3. Source, License: Creative Commons Attribution 3.0 Unported

4. Source, License: Creative Commons Attribution 2.0 Generic

5. Source, License: Creative Commons Attribution-Share Alike 2.0 Generic

Creative Commons License

If you were charged for a copy of this text, demand a refund. This work is available free of charge on openglbook.com.
