The history of free and open-source software begins at the advent of computer software in the early half of the 20th century. In the 1950s and 1960s, computer operating software and compilers were delivered as a part of hardware purchases without separate fees. At the time, source code—the human-readable form of software—was generally distributed with the software, providing the ability to fix bugs or add new functions. Universities were early adopters of computing technology. Many of the modifications developed by universities were openly shared, in keeping with the academic principles of sharing knowledge, and organizations sprang up to facilitate sharing.
As large-scale operating systems matured, fewer organizations allowed modifications to the operating software, and eventually such operating systems were closed to modification. However, utilities and other added-function applications continued to be shared, and new organizations were formed to promote the sharing of software.
The concept of free sharing of technological information existed long before computers. For example, in the early years of automobile development, one enterprise owned the rights to a 2-cycle gasoline engine patent originally filed by George B. Selden.[1] By controlling this patent, they were able to monopolize the industry and force car manufacturers to adhere to their demands, or risk a lawsuit. In 1911, independent automaker Henry Ford won a challenge to the Selden patent. The result was that the Selden patent became virtually worthless and a new association (which would eventually become the Motor Vehicle Manufacturers Association) was formed.[1] The new association instituted a cross-licensing agreement among all US auto manufacturers: although each company would develop technology and file patents, these patents were shared openly and without the exchange of money between all the manufacturers.[1] By the time the US entered World War II, 92 Ford patents and 515 patents from other companies were being shared between these manufacturers, without any exchange of money (or lawsuits).[1][improper synthesis?]
Computer software was created in the early half of the 20th century.[2][3][4] In the 1950s and into the 1960s, almost all software was produced by academics and corporate researchers working in collaboration,[5] and was often shared as public-domain software. As such, it was generally distributed under the principles of openness and cooperation long established in the fields of academia, and was not seen as a commodity in itself. Such communal behavior later became a central element of the so-called hacking culture (a term with a positive connotation among free software programmers).
Computer operating software and compilers were delivered as a part of hardware purchases without separate fees. At this time, source code, the human-readable form of software, was generally distributed along with the machine code because users frequently modified the software themselves: it would not run on different hardware or operating systems without modification, and users also needed to fix bugs or add new functions.[6][7][failed verification][8] The first example of free and open-source software is believed to be the A-2 system, developed at the UNIVAC division of Remington Rand in 1953,[9] which was released to customers with its source code. They were invited to send their improvements back to UNIVAC.[10] Later, almost all IBM mainframe software was also distributed with source code included. User groups such as that of the IBM 701, called SHARE, and that of Digital Equipment Corporation (DEC), called DECUS, were formed to facilitate the exchange of software. The SHARE Operating System, originally developed by General Motors, was distributed by SHARE for the IBM 709 and 7090 computers. Some university computer labs even had a policy requiring that all programs installed on the computer had to come with published source-code files.[11]
In 1969 the Advanced Research Projects Agency Network (ARPANET), a transcontinental high-speed computer network, was constructed. The network (later succeeded by the Internet) simplified the exchange of software code.[6]
Some free software which was developed in the 1970s continues to be developed and used, such as TeX (developed by Donald Knuth)[12] and SPICE.[13]
By the late 1960s change was coming: as operating systems and programming language compilers evolved, software production costs were rising dramatically relative to hardware. A growing software industry was competing with the hardware manufacturers' bundled software products (the cost of bundled products was included in the hardware cost), leased machines required software support while providing no revenue for software, and some customers, able to better meet their own needs,[14] did not want the costs of the manufacturer's software to be bundled with hardware product costs. In the United States v. IBM antitrust suit, filed 17 January 1969, the U.S. government charged that bundled software was anticompetitive.[15] While some software continued to come at no cost, there was a growing amount of software that was for sale only under restrictive licenses.
In the early 1970s AT&T distributed early versions of Unix at no cost to government and academic researchers, but these versions did not come with permission to redistribute or to distribute modified versions, and were thus not free software in the modern meaning of the phrase. After Unix became more widespread in the early 1980s, AT&T stopped the free distribution and charged for system patches. As it was quite difficult to switch to another architecture, most researchers paid for a commercial license.
Software was not considered copyrightable before the 1974 US Commission on New Technological Uses of Copyrighted Works (CONTU) decided that "computer programs, to the extent that they embody an author's original creation, are proper subject matter of copyright".[16][17] Until then, software had no licenses attached and was shared as public-domain software, typically with source code. The CONTU decision, plus later court decisions such as Apple v. Franklin in 1983 for object code, gave computer programs the copyright status of literary works and started the licensing of software and the shrink-wrap closed-source software business model.[18]
In the late 1970s and early 1980s, computer vendors and software-only companies began routinely charging for software licenses, marketing software as "Program Products" and imposing legal restrictions on new software developments, now seen as assets, through copyrights, trademarks, and leasing contracts. In 1976 Bill Gates wrote an essay entitled "Open Letter to Hobbyists", in which he expressed dismay at the widespread sharing of Microsoft's product Altair BASIC by hobbyists without paying its licensing fee. In 1979, AT&T began to enforce its licenses when the company decided it might profit by selling the Unix system.[19] In an announcement letter dated 8 February 1983, IBM inaugurated a policy of no longer distributing sources with purchased software.[20][21]
To increase revenues, a general trend began of no longer distributing source code (easily readable by programmers) and distributing only the executable machine code compiled from it. One person especially distressed by this new practice was Richard Stallman. He was concerned that he could no longer study or further modify programs initially written by others, and viewed the practice as ethically wrong. In response, he founded the GNU Project in 1983 so that people could use computers running only free software.[8] He established a non-profit organization, the Free Software Foundation, in 1985, to organize the project more formally. He invented copyleft, a legal mechanism to preserve the "free" status of a work subject to copyright, and implemented this in the GNU General Public License. Copyleft licenses allow authors to grant a number of rights to users (including rights to use a work without further charges, and rights to obtain, study and modify the program's complete corresponding source code) but require derivatives to remain under the same license or one without any additional restrictions. Since derivatives include combinations with other original programs, downstream authors are prevented from turning the initial work into proprietary software, and invited to contribute to the copyleft commons.[6] Later, variations of such licenses were developed by others.
However, there were still those who wished to share their source code with other programmers and/or with users on a free basis, then called "hobbyists" and "hackers".[22] Before the introduction and widespread public use of the internet, there were several alternative ways available to do this, including listings in computer magazines (like Dr. Dobb's Journal, Creative Computing, SoftSide, Compute!, Byte, etc.) and in computer programming books, like the bestseller BASIC Computer Games.[23] Though still copyrighted, annotated source code for key components of the system software for Atari 8-bit computers was published in mass market books, including The Atari BASIC Source Book[24] (full source for Atari BASIC) and Inside Atari DOS (full source for Atari DOS).[25]
The SHARE users group, founded in 1955, began collecting and distributing free software. The first documented distribution from SHARE was dated 17 October 1955.[26] The "SHARE Program Library Agency" (SPLA) distributed information and software, notably on magnetic tape.
In the early 1980s, the so-called DECUS tapes[27] were a worldwide system for the transmission of free software for users of DEC equipment. Operating systems were usually proprietary software, but many tools like the TECO editor, the Runoff text formatter, or the List file listing utility were developed to make users' lives easier, and distributed on the DECUS tapes. These utility packages benefited DEC, which sometimes incorporated them into new releases of their proprietary operating system. Even compilers could be distributed, and for example Ratfor (and Ratfiv) helped researchers to move from Fortran coding to structured programming (suppressing the GO TO statement). The 1981 DECUS tape was probably the most innovative, introducing the Lawrence Berkeley Laboratory Software Tools Virtual Operating System, which permitted users to use a Unix-like system on DEC 16-bit PDP-11s and 32-bit VAXes running under the VMS operating system. It was similar to the current Cygwin system for Windows. Binaries and libraries were often distributed, but users usually preferred to compile from source code.[citation needed]
In the 1980s, parallel to the free software movement, software with source code was shared on BBS networks. This was sometimes a necessity; software written in BASIC and other interpreted languages could only be distributed as source code, and much of it was freeware. As users began gathering such source code, and setting up boards specifically to discuss its modification, a de facto open-source system was formed.
One of the most obvious examples of this is one of the most-used BBS systems and networks, WWIV, developed initially in BASIC by Wayne Bell. A culture of "modding" his software, and distributing the mods, grew up so extensively that when the software was ported first to Pascal and then to C++, its source code continued to be distributed to registered users, who would share mods and compile their own versions of the software. This may have contributed to it being a dominant system and network, despite being outside the Fidonet umbrella that was shared by so many other BBS makers.
Meanwhile, the advent of Usenet and UUCPNet in the early 1980s further connected the programming community and provided a simpler way for programmers to share their software and contribute to software others had written.[28]
In 1983, Richard Stallman launched the GNU Project to write a complete operating system free from constraints on use of its source code. Among the incidents that motivated this was a case in which an annoying printer could not be fixed because the source code was withheld from users.[29] Stallman also published the GNU Manifesto in 1985 to outline the GNU Project's purpose and explain the importance of free software. Another probable inspiration for the GNU Project and its manifesto was a disagreement between Stallman and Symbolics, Inc. over MIT's access to updates Symbolics had made to its Lisp machine, which was based on MIT code.[30] Soon after the launch, he[22] used[clarification needed] the existing term "free software" and founded the Free Software Foundation to promote the concept. The Free Software Definition was published in February 1986.
In 1989, the first version of the GNU General Public License was published.[31] A slightly updated version 2 was published in 1991. In 1989, some GNU developers formed the company Cygnus Solutions.[32] The GNU project's kernel, later called "GNU Hurd", was continually delayed, but most other components were completed by 1991. Some of these, especially the GNU Compiler Collection, had become market leaders[clarification needed] in their own right. The GNU Debugger and GNU Emacs were also notable successes.
The Linux kernel, started by Linus Torvalds, was released as freely modifiable source code in 1991. The license was not a free software license, but with version 0.12 in February 1992, Torvalds relicensed the project under the GNU General Public License.[33] Much like Unix, Torvalds' kernel attracted attention from volunteer programmers.
Until this point, the GNU Project's lack of a kernel meant that no complete free software operating system existed. The development of Torvalds' kernel closed that last gap. The combination of the almost-finished GNU operating system and the Linux kernel made the first complete free software operating system.
Among Linux distributions, Debian GNU/Linux, begun by Ian Murdock in 1993, is noteworthy for being explicitly committed to the GNU and FSF principles of free software. The Debian developers' principles are expressed in the Debian Social Contract. Since its inception, the Debian project has been closely linked with the FSF, and in fact was sponsored by the FSF for a year in 1994–1995. In 1997, former Debian project leader Bruce Perens also helped found Software in the Public Interest, a non-profit funding and support organization for various free software projects.[34]
Since 1996, the Linux kernel has included proprietary licensed components, so that it was no longer entirely free software.[35] In 2008, the Free Software Foundation Latin America therefore released Linux-libre, a modified version of the Linux kernel from which all proprietary and non-free components were removed.
Many businesses offer customized Linux-based products, or distributions, with commercial support. The naming remains controversial. Referring to the complete system as simply "Linux" is common usage. However, the Free Software Foundation, and many others, advocate the use of the term "GNU/Linux", saying that it is a more accurate name for the whole operating system.[36]
Linux adoption grew among businesses and governments in the 1990s and 2000s. In the English-speaking world at least, Ubuntu and its derivatives became a relatively popular set of Linux distributions.
When the USL v. BSDi lawsuit was settled out of court in 1993, FreeBSD and NetBSD (both derived from 386BSD) were released as free software. In 1995, OpenBSD forked from NetBSD. In 2004, DragonFly BSD forked from FreeBSD.
In the mid-to-late 1990s, when many website-based companies were starting up, free software became a popular choice for web servers. The Apache HTTP Server became the most-used web-server software, a title it still held as of 2015.[37] Systems based on a common "stack" of software with the Linux kernel at the base, Apache providing web services, the database engine MySQL to store data, and the programming language PHP to provide dynamic pages came to be termed LAMP systems. In actuality, the programming language that predated PHP and dominated the web in the mid and late 1990s was Perl. Web forms were processed on the server side through Common Gateway Interface scripts written in Perl.
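The CGI mechanism described above is simple enough to sketch. The snippet below is a hypothetical illustration (written in Python rather than the period-accurate Perl, with an invented `handle_form` helper): the web server hands the script the submitted form data, and the script emits an HTTP header block, a blank line, and then a dynamically generated HTML page.

```python
from urllib.parse import parse_qs

def handle_form(query_string: str) -> str:
    """Mimic a 1990s CGI script: parse the form data passed by the
    web server and return a dynamically generated HTML page."""
    fields = parse_qs(query_string)
    # Form fields arrive as lists; take the first value or a default.
    name = fields.get("name", ["stranger"])[0]
    # A CGI script printed headers, then a blank line, then the body.
    return (
        "Content-Type: text/html\r\n"
        "\r\n"
        f"<html><body><h1>Hello, {name}!</h1></body></html>"
    )

if __name__ == "__main__":
    # For GET requests the server passed form data in the
    # QUERY_STRING environment variable; here we simulate that.
    print(handle_form("name=Ada&lang=en"))
```

Each request spawned a fresh process running such a script, which is why later approaches (mod_perl, PHP embedded in the server) displaced plain CGI for busy sites.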
The term "open source," as related to free software, was in common use by 1995.[38] Other recollections place it in use during the 1980s.[39]
In 1997, Eric S. Raymond published "The Cathedral and the Bazaar", a reflective analysis of the hacker community and free software principles. The paper received significant attention in early 1998 and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as free software.[40]
Netscape's act prompted Raymond and others to look into how to bring free software principles and benefits to the commercial-software industry. They concluded that FSF's social activism was not appealing to companies like Netscape, and looked for a way to rebrand the free software movement to emphasize the business potential of the sharing of source code.[41]
The label "open source" was adopted by some people in the free software movement at a strategy session[42] held in Palo Alto, California, in reaction to Netscape's January 1998 announcement of a source code release for Navigator. The group of individuals at the session included Christine Peterson, who suggested "open source",[8] Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann, and Eric S. Raymond. Over the next week, Raymond and others worked on spreading the word. Linus Torvalds gave an all-important sanction the following day. Phil Hughes offered a pulpit in Linux Journal. Richard Stallman, pioneer of the free software movement, flirted with adopting the term, but changed his mind.[42] Those people who adopted the term used the opportunity before the release of Navigator's source code to free themselves of the ideological and confrontational connotations of the term "free software". Netscape released its source code under the Netscape Public License and later under the Mozilla Public License.[43]
The term was given a big boost at an event organized in April 1998 by technology publisher Tim O'Reilly. Originally titled the "Freeware Summit" and later named the "Open Source Summit",[44] the event brought together the leaders of many of the most important free and open-source projects, including Linus Torvalds, Larry Wall, Brian Behlendorf, Eric Allman, Guido van Rossum, Michael Tiemann, Paul Vixie, Jamie Zawinski of Netscape, and Eric Raymond. At that meeting, the confusion caused by the name "free software" was brought up. Tiemann argued for "sourceware" as a new term, while Raymond argued for "open source". The assembled developers took a vote, and the winner was announced at a press conference that evening. Five days later, Raymond made the first public call to the free software community to adopt the new term.[45] The Open Source Initiative was formed shortly thereafter.[8][42] According to the OSI, Richard Stallman initially flirted with the idea of adopting the open source term.[46] But as the enormous success of the term "open source" eclipsed Stallman's term "free software" and his message on social values and computer users' freedom,[47][48][49] Stallman and his FSF later strongly objected to the OSI's approach and terminology.[50] Due to Stallman's rejection of the term "open-source software", the FOSS ecosystem is divided in its terminology; see also Alternative terms for free software. For example, a 2002 FOSS developer survey revealed that 32.6% associated themselves with OSS, 48% with free software, and 19.4% were in between or undecided.[51] Stallman still maintained, however, that users of each term were allies in the fight against proprietary software.
On 13 October 2000, Sun Microsystems released[52] the StarOffice office suite as free software under the GNU Lesser General Public License. The free software version was renamed OpenOffice.org, and coexisted with StarOffice. It was designed as a free, compatible replacement for the Microsoft Office suite.
By the end of the 1990s, the term "open source" gained much traction in public media[53] and acceptance in the software industry in the context of the dot-com bubble and the open-source-software-driven Web 2.0.

The X Window System was created in 1984, and became the de facto standard window system in desktop free software operating systems by the mid-1990s. X runs as a server, and is responsible for communicating with graphics hardware on behalf of clients (which are individual software applications). It provides useful services such as having multiple virtual desktops for the same monitor, and transmitting visual data across the network so a desktop can be accessed remotely.
Initially, users or system administrators assembled their own environments from X and available window managers (which add standard controls to application windows; X itself does not do this), pagers, docks and other software. While X can be operated without a window manager, having one greatly increases convenience and ease of use.
Two key "heavyweight" desktop environments for free software operating systems emerged in the 1990s and were widely adopted: KDE and GNOME. KDE was founded in 1996 by Matthias Ettrich. At the time, he was troubled by the inconsistencies in the user interfaces of UNIX applications, and proposed a new, easy-to-use desktop environment. His initial Usenet post spurred a lot of interest.[54]
Ettrich chose to use the Qt toolkit for the KDE project. At the time, Qt did not use a free software license. Members of the GNU project became concerned with the use of such a toolkit for building a free software desktop environment. In August 1997, two projects were started in response to KDE: the Harmony toolkit (a free replacement for the Qt libraries) and GNOME (a different desktop without Qt and built entirely on top of free software).[55] GTK+ was chosen as the base of GNOME in place of the Qt toolkit.
In November 1998, the Qt toolkit was licensed under the free/open source Q Public License (QPL), but debate continued about compatibility with the GNU General Public License (GPL). In September 2000, Trolltech made the Unix version of the Qt libraries available under the GPL, in addition to the QPL, which eliminated the concerns of the Free Software Foundation. KDE has since been split into KDE Plasma Workspaces, a desktop environment, and KDE Software Compilation, a much broader set of software that includes the desktop environment.
Both KDE and GNOME now participate in freedesktop.org, an effort launched in 2000 to standardize Unix desktop interoperability, although there is still competition between them.[56]
Since 2000, software written for X almost always uses some widget toolkit written on top of X, like Qt or GTK.[citation needed]
In 2010, Canonical released the first version of Unity, a replacement for the prior default desktop environment for Ubuntu, GNOME. This change to a new, under-development desktop environment and user interface was initially somewhat controversial among Ubuntu users.
In 2011, GNOME 3 was introduced, which largely discarded the desktop metaphor in favor of a more mobile-oriented interface. The ensuing controversy led Debian to consider making the Xfce environment the default on Debian 7. Several independent projects were begun to keep maintaining the GNOME 2 code.
Fedora Linux did not adopt Unity, retaining its existing offering of a choice of GNOME, KDE and LXDE with GNOME as the default, and hence Red Hat Enterprise Linux (for which Fedora acts as the "initial testing ground") did not adopt Unity either. A fork of Ubuntu was made by interested third-party developers that kept GNOME and discarded Unity. In March 2017, Ubuntu announced that it would abandon Unity in favour of GNOME 3 in future versions, and cease its efforts in developing Unity-based smartphones and tablets.[57][58]
When Google built the Linux-based Android operating system, mostly for phone and tablet devices, it replaced X with the purpose-built SurfaceFlinger.
Open-source developers also criticized X as obsolete, carrying many unused or overly complicated elements in its protocol and libraries, while missing modern functionality, e.g., compositing, screen savers, and functions provided by window managers.[59] Several attempts have been made or are underway to replace X for these reasons.
As free software became more popular, industry incumbents such as Microsoft started to see it as a serious threat. This was shown in a leaked 1998 document, confirmed by Microsoft as genuine, which came to be called the first of the Halloween Documents.
Steve Ballmer once compared the GPL to "a cancer", but has since stopped using this analogy. Indeed, Microsoft has softened its public stance towards open source in general, with open source since becoming an important part of the Microsoft Windows ecosystem.[61]
In 2003, a proprietary Unix vendor and former Linux distribution vendor called SCO alleged that Unix intellectual property had been inappropriately copied into the Linux kernel, and sued IBM, claiming that it bore responsibility for this. Several related lawsuits and countersuits followed, some originating from SCO, some from others suing SCO. However, SCO's allegations lacked specificity, and while some in the media reported them as credible, many critics of SCO believed the allegations to be highly dubious at best.
Over the course of the SCO v. IBM case, it emerged that not only had SCO been distributing the Linux kernel for years under the GPL, and continued to do so (thus rendering any claims hard to sustain legally), but that SCO did not even own the copyrights to much of the Unix code that it asserted copyright over, and had no right to sue over them on behalf of the presumed owner, Novell.
This was despite SCO's CEO, Darl McBride, having made many wild and damaging claims of misappropriation to the media, many of which were later shown to be false, or legally irrelevant even if true.
The blog Groklaw was one of the most forensic examiners of SCO's claims and related events, and gained its popularity from covering this material for many years.
SCO suffered defeat after defeat in SCO v. IBM and its various other court cases, and filed for Chapter 11 bankruptcy in 2007. However, despite the courts finding that SCO did not own the copyrights (see above), and SCO's lawsuit-happy CEO Darl McBride no longer running the company, the bankruptcy trustee in charge of SCO-in-bankruptcy decided to press on with some portions he claimed remained relevant in the SCO v. IBM lawsuit. He could apparently afford to do this because SCO's main law firm in SCO v. IBM had signed an agreement at the outset to represent SCO for a fixed amount of money no matter how long the case took to complete.
In 2004, the Alexis de Tocqueville Institution (ADTI) announced its intent to publish a book, Samizdat: And Other Issues Regarding the 'Source' of Open Source Code, showing that the Linux kernel was based on code stolen from Unix, in essence using the argument that it was impossible to believe that Linus Torvalds could produce something as sophisticated as the Linux kernel. The book was never published, after it was widely criticised and ridiculed, including by people supposedly interviewed for the book. It emerged that some of the people were never interviewed, and that ADTI had not tried to contact Linus Torvalds, or ever put the allegations to him to allow a response. Microsoft attempted to draw a line under this incident, stating that it was a "distraction".
Many suspected that some or all of these legal and fear, uncertainty and doubt (FUD) attacks against the Linux kernel were covertly arranged by Microsoft, although this has never been proven. Both ADTI and SCO, however, received funding from Microsoft.
In 2008 the International Organization for Standardization published Microsoft's Office Open XML as an international standard, which crucially meant that it, and therefore Microsoft Office, could be used in projects where the use of open standards was mandated by law or by policy. Critics of the standardisation process, including some members of ISO national committees involved in the process itself, alleged irregularities and procedural violations in the process, and argued that the ISO should not have approved OOXML as a standard because it made reference to undocumented Microsoft Office behaviour.
As of 2012, no correct open-source implementation of OOXML existed, which validates the critics' remarks about OOXML being difficult to implement and underspecified. At the time, Google could not yet correctly convert Office documents into its own proprietary Google Docs format. This suggests that OOXML is not a true open standard, but rather a partial document describing what Microsoft Office does, and only involving certain file formats.
In 2006 Microsoft launched its CodePlex open source code hosting site, to provide hosting for open-source developers targeting Microsoft platforms. In July 2009 Microsoft open sourced some Hyper-V-supporting patches to the Linux kernel, because they were required to do so by the GNU General Public License,[62][63] and contributed them to the mainline kernel. Note that Hyper-V itself is not open source. Microsoft's F# compiler, created in 2002, has also been released as open source under the Apache license. The F# compiler is a commercial product, as it has been incorporated into Microsoft Visual Studio, which is not open source.
Microsoft representatives have made regular appearances at various open source and Linux conferences for many years.
In 2012, Microsoft launched a subsidiary named Microsoft Open Technologies Inc., with the aim of bridging the gap between proprietary Microsoft technologies and non-Microsoft technologies by engaging with open-source standards.[64] This subsidiary was subsequently folded back into Microsoft as Microsoft's position on open source and non-Windows platforms became more favourable.
In January 2016 Microsoft released Chakra as open source under the MIT License; the code is available on GitHub.[65]
Microsoft's stance on open source has shifted as the company began endorsing more open-source software. In 2016, Steve Ballmer, former CEO of Microsoft, retracted his statement that Linux is a malignant cancer.[66] In 2017, the company became a platinum supporter of the Linux Foundation. By 2018, shortly before acquiring GitHub, Microsoft led the charts in the number of paid staff contributing to open-source projects there.[67]
Critics have noted that, in March 2019, Microsoft sued Foxconn's subsidiary over a 2013 patent contract;[68] in 2013, Microsoft had announced a patent agreement with Foxconn related to Foxconn's use of the Linux-based Android and ChromeOS.[69]
The vast majority of programming languages in use today have a free software implementation available.
Since the 1990s, releasing major new programming languages in the form of open-source compilers and/or interpreters has been the norm rather than the exception. Examples include Python in 1991, Ruby in 1995, and Scala in 2003. In recent times, the most notable exceptions have been Java, ActionScript, C#, and Apple's Swift, which was proprietary until version 2.2. Partly compatible open-source implementations have been developed for most of these, and in the case of Java, the main open-source implementation is by now very close to the commercial version.
Since its first public release in 1996, the Java platform had not been open source, although the Java source code portion of the Java runtime was included in Java Development Kits (JDKs), on a purportedly "confidential" basis, despite being freely downloadable by the general public in most countries. Sun later expanded this "confidential" source code access to include the full source code of the Java Runtime Environment via a separate program open to members of the public, and later also made the source of the Java compiler javac available. Sun also made the JDK source code available confidentially to the Blackdown Java project, a collection of volunteers who ported early versions of the JDK to Linux, or improved on Sun's Linux ports of the JDK. However, none of this was open source, because modification and redistribution without Sun's permission were forbidden in all cases. Sun stated at the time that it was concerned about preventing forking of the Java platform.
However, several independent partial reimplementations of the Java platform had been created, many of them by the open-source community, such as the GNU Compiler for Java (GCJ). Sun never filed lawsuits against any of the open-source clone projects. GCJ notably caused a bad user experience for Java on free-software-supporting distributions such as Fedora and Ubuntu, which shipped GCJ at the time as their Java implementation. Because GCJ was an incomplete, incompatible, and buggy implementation, how to replace it with the Sun JDK became a frequently asked question among users.
In 2006 Jonathan I. Schwartz became CEO of Sun Microsystems and signalled his commitment to open source. On 8 May 2007, Sun Microsystems released the Java Development Kit as OpenJDK under the GNU General Public License. Part of the class library (4%) could not be released as open source because it was licensed from other parties, and was instead included as binary plugs.[citation needed] Because of this, in June 2007, Red Hat launched IcedTea to replace the encumbered components with equivalents from the GNU Classpath implementation. Since the release, most of the encumbrances have been resolved, leaving only the audio engine code and the colour management system (the latter is to be resolved using Little CMS).
The first open-source distributed revision control system (DVCS) was 'tla' in 2001 (since renamed to GNU arch); however, it and its successors 'baz' and 'bzr' (Bazaar) never became very popular, and GNU arch was discontinued, although Bazaar continues and is used by Canonical.
However, other DVCS projects sprang up, and some started to gain significant adoption.
Git, the most popular DVCS, was created in 2005.[70] Some developers of the Linux kernel, notably Linux founder Linus Torvalds, had started to use a proprietary DVCS called BitKeeper, although other kernel developers never used it because of its proprietary nature. This unusual situation, in which Linux kernel development partly depended on proprietary software, came to a head when Andrew Tridgell started to reverse-engineer BitKeeper with the aim of producing an open-source tool that could provide some of the same functionality as the commercial version. In response, BitMover, the company that developed BitKeeper, revoked in 2005 the special free-of-charge license it had granted to certain kernel developers.
As a result of the removal of the BitKeeper license, Linus Torvalds decided to write his own DVCS, called Git, because he thought none of the existing open-source DVCSs was suitable for his particular needs as a kernel maintainer (which was why he had adopted BitKeeper in the first place). A number of other developers quickly jumped in to help him, and Git grew over time from a relatively simple "stupid content tracker" (on which some developers built "porcelain" extensions) into the sophisticated and powerful DVCS it is today. Torvalds no longer maintains Git himself; it has been maintained by Junio Hamano for many years and continues to receive contributions from many developers.
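The "stupid content tracker" at Git's core is still visible today in its low-level "plumbing" commands, which store and retrieve content purely by hash, while "porcelain" commands such as git commit are layered on top of them. A minimal sketch, assuming Git is installed and using its default SHA-1 object format (the directory name is illustrative):

```shell
# Git's content-tracker core, exercised via plumbing commands.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo

# hash-object -w stores content as a blob and prints its address,
# which is derived purely from the content itself.
id=$(echo "hello" | git hash-object -w --stdin)
echo "$id"   # ce013625030ba8dba906f756967f9e9ca394464a

# cat-file reads the stored content back by that address.
git cat-file -p "$id"   # hello
```

Because the address is a function of the content alone, the same bytes always hash to the same blob id in every repository, which is what makes distributed exchange of objects between clones possible.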
The increasing popularity of open-source DVCSs such as Git, followed by DVCS hosting sites, the most popular of which is GitHub (founded 2008), further reduced the barriers to participation in free software projects. No longer did potential contributors have to hunt for the URL of the source code repository (which could be in a different place on each website, or tucked away in a README file or developer documentation), work out how to generate a patch, or subscribe to the right mailing list so that their patch email would reach the right people. Contributors can simply fork their own copy of a repository with one click and issue a pull request from the appropriate branch when their changes are ready. GitHub has become the most popular hosting site in the world for open-source software, and this, together with the ease of forking and the visibility of forks, has made it a popular way for contributors to make changes, large and small.
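The fork-and-pull-request flow can be sketched with plain Git commands; what GitHub adds is hosting and a web interface around the same underlying operations. A minimal local simulation, assuming Git is installed (the repository and branch names are illustrative, and the pull request itself, a GitHub feature, is approximated by the maintainer merging the contributor's branch directly):

```shell
# Simulate fork -> topic branch -> "pull request" with local repos only.
set -e
cd "$(mktemp -d)"

# The maintainer's repository ("upstream").
git init -q upstream
git -C upstream -c user.name=Maintainer -c user.email=m@example.com \
    commit -q --allow-empty -m "initial commit"

# "Forking" is, underneath, cloning the upstream repository.
git clone -q upstream fork

# The contributor works on a topic branch in the fork.
cd fork
git checkout -q -b fix-typo
echo "patched" > fix.txt
git add fix.txt
git -c user.name=Contributor -c user.email=c@example.com \
    commit -q -m "Fix typo"

# A pull request asks upstream to merge the branch; locally the
# maintainer can fetch it from the fork and merge it.
cd ../upstream
git fetch -q ../fork fix-typo
git -c user.name=Maintainer -c user.email=m@example.com \
    merge -q FETCH_HEAD
git log --oneline -1
```

The one-click fork and the pull-request button thus automate a fetch-and-merge that any Git user could perform by hand, which is why the workflow lowered the barrier to entry rather than introducing anything fundamentally new.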
While copyright is the primary legal mechanism that FOSS authors use to ensure license compliance for their software, other mechanisms such as legislation, software patents, and trademarks also play a role. In response to legal issues with patents and the DMCA, the Free Software Foundation released version 3 of its GNU General Public License in 2007, which explicitly addressed the DMCA's digital rights management (DRM) provisions and patent rights.
After the development of the GNU GPLv3, the FSF, as copyright holder of many pieces of the GNU system such as the GNU Compiler Collection (GCC), updated most[citation needed] of the GNU programs' licenses from GPLv2 to GPLv3. Apple, a user of GCC and a heavy user of both DRM and patents, decided to switch the compiler in its Xcode IDE from GCC to Clang, another FOSS compiler[71] that is under a permissive license.[72] LWN speculated that Apple was motivated partly by a desire to avoid GPLv3.[71] The Samba project also switched to GPLv3, after which Apple replaced Samba in its software suite with a closed-source, proprietary alternative.[73]
Recent mergers have affected major open-source software. Sun Microsystems (Sun) acquired MySQL AB, owner of the popular open-source MySQL database, in 2008.[74]
Oracle in turn purchased Sun in January 2010, acquiring its copyrights, patents, and trademarks. This made Oracle the owner of both the most popular proprietary database and the most popular open-source database (MySQL).[citation needed] Oracle's attempts to commercialize the open-source MySQL database have raised concerns in the FOSS community.[75] Partly in response to uncertainty about the future of MySQL, the FOSS community forked the project into new database systems outside of Oracle's control. These include MariaDB, Percona, and Drizzle.[76] Because these are distinct projects, none of them can use the trademarked name MySQL.[77]
In September 2008, Google released the first version of Android, a new smartphone operating system, as open source (some Google applications that are sometimes, but not always, bundled with Android are not open source). Initially, the operating system was given away for free by Google and was eagerly adopted by many handset makers; Google later bought Motorola Mobility and produced its own "vanilla" Android phones and tablets, while continuing to allow other manufacturers to use Android. Android is now the world's most popular mobile platform.[78]
Because Android is based on the Linux kernel, Linux is now the dominant kernel both on mobile platforms (via Android) and on supercomputers,[79] and a key player in server operating systems as well.
In August 2010, Oracle sued Google, claiming that its use of Java in Android infringed on Oracle's copyrights and patents. The initial Oracle v. Google trial ended in May 2012 with the finding that Google did not infringe on Oracle's patents, and the trial judge ruled that the structure of the Java application programming interfaces (APIs) used by Google was not copyrightable. The jury found that Google committed a trivial ("de minimis") copyright infringement, but the parties stipulated that Google would pay no damages because it was so trivial.[80] However, Oracle appealed to the Federal Circuit, and Google filed a cross-appeal on the literal copying claim.[81] The Federal Circuit ruled that the small copyright infringement acknowledged by Google was not de minimis and sent the fair-use issue back to the trial judge for reconsideration. In 2016, the case was retried and a jury found for Google on the grounds of fair use.
By 2013 Google's Chromebooks, running ChromeOS, had captured 20–25% of the sub-$300 US laptop market.[82] ChromeOS is built from the open-source ChromiumOS, which is based on Linux, in much the same way that versions of Android shipped on commercially available phones are built from the open-source version of Android.
So if open source used to be the norm back in the 1960s and 70s, how did this change? Where did proprietary software come from, and when, and how? How did Richard Stallman's little utopia at the MIT AI lab crumble and force him out into the wilderness to try to rebuild it? Two things changed in the early 1980s: the exponentially growing installed base of microcomputer hardware reached critical mass around 1980, and a legal decision altered copyright law to cover binaries in 1983.
While IBM's policy of withholding source code for selected software products has already marked its second anniversary, users are only now beginning to cope with the impact of that decision. But whether or not the advent of object-code-only products has affected their day-to-day DP operations, some users remain angry about IBM's decision. Announced in February 1983, IBM's object-code-only policy has been applied to a growing list of Big Blue system software products.