Digital electronics

From Wikipedia, the free encyclopedia
Electronic circuits that utilize digital signals
A digital signal has two or more distinguishable waveforms, in this example, high and low voltages, each of which can be mapped onto a digit.
An industrial digital controller

Digital electronics is a field of electronics involving the study of digital signals and the engineering of devices that use or produce them. It deals with the relationship between binary inputs and outputs by passing electrical signals through logic gates, resistors, capacitors, amplifiers, and other electronic components. The field of digital electronics stands in contrast to analog electronics, which works primarily with analog signals (signals with continuously varying levels, as opposed to on/off two-state binary signals). Despite the name, digital electronics designs include important analog design considerations.

Large assemblies of logic gates, used to represent more complex ideas, are often packaged into integrated circuits. Complex devices may have simple electronic representations of Boolean logic functions.[1]

History


The binary number system was refined by Gottfried Wilhelm Leibniz (published in 1705), who also established that by using the binary system, the principles of arithmetic and logic could be joined. Digital logic as we know it was the invention of George Boole in the mid-19th century. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits.[2] Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification of the Fleming valve in 1907 could be used as an AND gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, shared the 1954 Nobel Prize in Physics for creating the first modern electronic AND gate in 1924.

Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed, with the term digital being proposed by George Stibitz in 1942. Originally they were the size of a large room, consuming as much power as several hundred modern PCs.[3]

Claude Shannon laid the foundations of digital computing and digital circuits in his 1937 master's thesis, in which he demonstrated that electrical applications of Boolean algebra could construct any logical numerical relationship. The thesis, which won the 1939 Alfred Noble Prize, is widely considered to be the most important master's thesis ever written.[4][5]

The Z3 was an electromechanical computer designed by Konrad Zuse. Finished in 1941, it was the world's first working programmable, fully automatic digital computer.[6] Its operation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming.

At the same time that digital calculation replaced analog, purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents. John Bardeen and Walter Brattain invented the point-contact transistor at Bell Labs in 1947, followed by William Shockley inventing the bipolar junction transistor at Bell Labs in 1948.[7][8]

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of vacuum tubes.[9] Their transistorised computer, the first in the world, was operational by 1953, and a second version was completed there in April 1955. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors were smaller, more reliable, had indefinite lifespans, and required less power, thereby giving off less heat and allowing much denser concentrations of circuits, up to tens of thousands in a relatively compact space.[citation needed]

In 1955, Carl Frosch and Lincoln Derick discovered silicon dioxide surface passivation effects.[10] In 1957, Frosch and Derick, using masking and predeposition, were able to manufacture silicon dioxide field-effect transistors: the first planar transistors, in which drain and source were adjacent at the same surface.[11] At Bell Labs, the importance of Frosch and Derick's technique and transistors was immediately realized. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni,[12][13][14][15] who would later invent the planar process in 1959 while at Fairchild Semiconductor.[16][17] At Bell Labs, J. R. Ligenza and W. G. Spitzer studied the mechanism of thermally grown oxides, fabricated a high-quality Si/SiO2 stack and published their results in 1960.[18][19][20] Following this research at Bell Labs, Mohamed Atalla and Dawon Kahng proposed a silicon MOS transistor in 1959[21] and successfully demonstrated a working MOS device with their Bell Labs team in 1960.[22][23] The team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device.[24][25]

While working at Texas Instruments in July 1958, Jack Kilby recorded his initial ideas concerning the integrated circuit (IC), then successfully demonstrated the first working integrated circuit on 12 September 1958.[26] Kilby's chip was made of germanium. The following year, Robert Noyce at Fairchild Semiconductor invented the silicon integrated circuit. The basis for Noyce's silicon IC was Hoerni's planar process.[citation needed]

The MOSFET's advantages include high scalability,[27] affordability,[28] low power consumption, and high transistor density.[29] Its rapid on–off electronic switching speed also makes it ideal for generating pulse trains,[30] the basis for electronic digital signals,[31][32] in contrast to BJTs, which more slowly generate analog signals resembling sine waves.[30] Along with MOS large-scale integration (LSI), these factors make the MOSFET an important switching device for digital circuits.[33] The MOSFET revolutionized the electronics industry,[34][35] and is the most common semiconductor device.[36][37]

In the early days of integrated circuits, each chip was limited to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. The wide adoption of the MOSFET transistor by the early 1970s led to the first large-scale integration (LSI) chips with more than 10,000 transistors on a single chip.[38] Following the wide adoption of CMOS, a type of MOSFET logic, by the 1980s, millions and then billions of MOSFETs could be placed on one chip as the technology progressed,[39] and good designs required thorough planning, giving rise to new design methods. The transistor count of devices and total production rose to unprecedented heights. The total number of transistors produced up to 2018 has been estimated at 1.3×10²² (13 sextillion).[40]

The wireless revolution (the introduction and proliferation of wireless networks) began in the 1990s and was enabled by the wide adoption of MOSFET-based RF power amplifiers (power MOSFET and LDMOS) and RF circuits (RF CMOS).[41][42][43] Wireless networks allowed for public digital transmission without the need for cables, leading to digital television, satellite and digital radio, GPS, wireless Internet and mobile phones through the 1990s–2000s.[citation needed]

Properties


An advantage of digital circuits when compared to analog circuits is that signals represented digitally can be transmitted without degradation caused by noise.[44] For example, a continuous audio signal transmitted as a sequence of 1s and 0s can be reconstructed without error, provided the noise picked up in transmission is not enough to prevent identification of the 1s and 0s.

In a digital system, a more precise representation of a signal can be obtained by using more binary digits to represent it. While this requires more digital circuits to process the signals, each digit is handled by the same kind of hardware, resulting in an easily scalable system. In an analog system, additional resolution requires fundamental improvements in the linearity and noise characteristics of each step of the signal chain.

With computer-controlled digital systems, new functions can be added through software revision and no hardware changes are needed. Often this can be done outside of the factory by updating the product's software. This way, the product's design errors can be corrected even after the product is in a customer's hands.

Information storage can be easier in digital systems than in analog ones. The noise immunity of digital systems permits data to be stored and retrieved without degradation. In an analog system, noise from aging and wear degrades the information stored. In a digital system, as long as the total noise is below a certain level, the information can be recovered perfectly. Even when more significant noise is present, the use of redundancy permits the recovery of the original data, provided too many errors do not occur.

In some cases, digital circuits use more energy than analog circuits to accomplish the same tasks, thus producing more heat, which adds to circuit complexity, for example through the inclusion of heat sinks. In portable or battery-powered systems this can limit the use of digital systems. For example, battery-powered cellular phones often use a low-power analog front-end to amplify and tune the radio signals from the base station. However, a base station has grid power and can use power-hungry, but very flexible, software radios. Such base stations can easily be reprogrammed to process the signals used in new cellular standards.

Many useful digital systems must translate from continuous analog signals to discrete digital signals. This causes quantization errors. Quantization error can be reduced if the system stores enough digital data to represent the signal to the desired degree of fidelity. The Nyquist–Shannon sampling theorem provides an important guideline as to how much digital data is needed to accurately portray a given analog signal.
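
As a rough illustration of these two constraints, the short sketch below (a minimal example using the common rule-of-thumb formulas, not any particular standard) computes the Nyquist sampling rate for a given bandwidth, and the quantization step and approximate signal-to-noise ratio for a given number of bits. The specific numbers chosen are illustrative.

```python
# Back-of-the-envelope: Nyquist rate and quantization for an ideal converter.
# Assumes the usual rule-of-thumb SNR formula for an ideal N-bit quantizer
# (about 6.02*N + 1.76 dB for a full-scale sine wave).

def nyquist_rate(max_frequency_hz: float) -> float:
    """Minimum sampling rate for a signal band-limited to max_frequency_hz."""
    return 2.0 * max_frequency_hz

def quantization_step(full_scale_volts: float, bits: int) -> float:
    """Step size of a uniform quantizer spanning full_scale_volts with 2**bits levels."""
    return full_scale_volts / (2 ** bits)

def ideal_snr_db(bits: int) -> float:
    """Approximate SNR of an ideal quantizer for a full-scale sine input."""
    return 6.02 * bits + 1.76

# Example: 20 kHz audio bandwidth and 16-bit samples.
print(nyquist_rate(20_000))        # 40000.0 Hz (CD audio samples at 44100 Hz, a little above this)
print(quantization_step(2.0, 16))  # ~30.5 microvolts per step for a 2 V range
print(ideal_snr_db(16))            # ~98.1 dB
```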

If a single piece of digital data is lost or misinterpreted, in some systems only a small error may result, while in other systems the meaning of large blocks of related data can completely change. For example, a single-bit error in audio data stored directly as linear pulse-code modulation causes, at worst, a single audible click. But when using audio compression to save storage space and transmission time, a single bit error may cause a much larger disruption.

Because of the cliff effect, it can be difficult for users to tell if a particular system is right on the edge of failure, or if it can tolerate much more noise before failing. Digital fragility can be reduced by designing a digital system for robustness. For example, a parity bit or other error management method can be inserted into the signal path. These schemes help the system detect errors, and then either correct the errors or request retransmission of the data.
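
The parity-bit idea is small enough to sketch directly (illustrative only; real links typically use stronger codes such as CRCs or Hamming codes): the sender appends one bit that makes the total number of 1s even, and the receiver re-checks that property to detect any single-bit error.

```python
# Even-parity example: detects (but cannot correct or locate) any single flipped bit.

def add_even_parity(bits: list[int]) -> list[int]:
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity: list[int]) -> bool:
    """True if the word still contains an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

word = [1, 0, 1, 1, 0, 0, 1]
sent = add_even_parity(word)
print(parity_ok(sent))          # True: no error

corrupted = sent.copy()
corrupted[3] ^= 1               # flip one bit in transit
print(parity_ok(corrupted))     # False: the single-bit error is detected
```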

Further information: Digital signal conditioning and Signal conditioning

Construction

A binary clock, hand-wired on breadboards

A digital circuit is typically constructed from small electronic circuits called logic gates that can be used to create combinational logic and sequential logic. Each logic gate is designed to perform a function of Boolean logic when acting on logic signals. A logic gate is generally created from one or more electrically controlled switches, usually transistors, though thermionic valves have seen historic use. The output of a logic gate can, in turn, control or feed into more logic gates.

Another form of digital circuit is constructed from lookup tables (many sold as "programmable logic devices", though other kinds of PLDs exist). Lookup tables can perform the same functions as machines based on logic gates, but can be easily reprogrammed without changing the wiring. This means that a designer can often repair design errors without changing the arrangement of wires. Therefore, in small-volume products, programmable logic devices are often the preferred solution. They are usually designed by engineers using electronic design automation software.

Integrated circuits consist of multiple transistors on one silicon chip and are the least expensive way to make a large number of interconnected logic gates. Integrated circuits are usually interconnected on a printed circuit board, which holds the components and connects them together with copper traces.

Design


Engineers use many methods to minimize logic redundancy in order to reduce the circuit complexity. Reduced complexity reduces component count and potential errors and therefore typically reduces cost. Logic redundancy can be removed by several well-known techniques, such as binary decision diagrams, Boolean algebra, Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. These operations are typically performed within a computer-aided design system.
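
For a flavour of what such minimization does, the sketch below uses the SOPform helper from the sympy library (assumed to be available; it performs Quine–McCluskey-style two-level minimization) to reduce a truth table, given as a list of minterms, to a small sum-of-products expression. The particular function minimized here is an arbitrary illustrative choice.

```python
# Two-level logic minimization with sympy (assumed installed).
from sympy import symbols
from sympy.logic import SOPform

w, x, y, z = symbols("w x y z")

# Input combinations [w, x, y, z] for which the function is 1.
minterms = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 1, 1, 1],
            [1, 0, 1, 1], [1, 1, 1, 1]]

print(SOPform([w, x, y, z], minterms))
# Prints a reduced sum of products such as (y & z) | (z & ~w & ~x),
# far smaller than the five-term canonical sum of products.
```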

Embedded systems with microcontrollers and programmable logic controllers are often used to implement digital logic for complex systems that do not require optimal performance. These systems are usually programmed by software engineers or by electricians, using ladder logic.

Representation


A digital circuit's input–output relationship can be represented as a truth table. An equivalent high-level circuit uses logic gates, each represented by a different shape (standardized by IEEE/ANSI 91-1984).[45] A low-level representation uses an equivalent circuit of electronic switches (usually transistors).
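
As a small illustration of the truth-table view, the sketch below enumerates the truth table of a half adder, a two-gate combinational circuit whose sum output is an XOR gate and whose carry output is an AND gate (a minimal illustrative example, not any specific standard part).

```python
# Truth table for a half adder: sum = a XOR b, carry = a AND b.
from itertools import product

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for two one-bit inputs."""
    return a ^ b, a & b

print("a b | sum carry")
for a, b in product([0, 1], repeat=2):
    s, c = half_adder(a, b)
    print(f"{a} {b} |  {s}    {c}")
```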

Most digital systems divide into combinational and sequential systems. The output of a combinational system depends only on the present inputs. However, a sequential system has some of its outputs fed back as inputs, so its output may depend on past inputs in addition to present inputs, producing a sequence of operations. Simplified representations of their behavior called state machines facilitate design and test.

Sequential systems divide into two further subcategories. "Synchronous" sequential systems change state all at once when a clock signal changes state. "Asynchronous" sequential systems propagate changes whenever inputs change. Synchronous sequential systems are made using flip-flops that store input voltages as a bit only when the clock changes.

Synchronous systems

A 4-bit ring counter using D-type flip-flops is an example of synchronous logic. Each device is connected to the clock signal, and they update together.
Main article: Synchronous logic

The usual way to implement a synchronous sequential state machine is to divide it into a piece of combinational logic and a set of flip-flops called a state register. The state register represents the state as a binary number. The combinational logic produces the binary representation of the next state. On each clock cycle, the state register captures the feedback generated from the previous state of the combinational logic and feeds it back as an unchanging input to the combinational part of the state machine. The clock rate is limited by the most time-consuming logic calculation in the combinational logic.
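
A minimal software model of this structure (a state register plus a pure combinational next-state function, advanced once per clock) is sketched below for the 4-bit ring counter shown in the figure; the names and the one-hot starting state are illustrative choices, not part of any particular design.

```python
# Software model of a synchronous state machine: a 4-bit ring counter.
# The "state register" is a tuple of bits; the combinational logic is a
# pure function computing the next state; one call per clock edge.

def next_state(state: tuple[int, ...]) -> tuple[int, ...]:
    """Combinational logic: rotate the bits right by one position."""
    return (state[-1],) + state[:-1]

state = (1, 0, 0, 0)           # one-hot initial value loaded into the flip-flops
for clock_cycle in range(6):
    print(clock_cycle, state)
    state = next_state(state)  # the register captures the new value on the clock edge
# The single 1 walks around the four flip-flops and repeats every 4 cycles.
```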

Asynchronous systems


Most digital logic is synchronous because it is easier to create and verify a synchronous design. However, asynchronous logic has the advantage of its speed not being constrained by an arbitrary clock; instead, it runs at the maximum speed of its logic gates.[a]

Nevertheless, most systems need to accept external unsynchronized signals into their synchronous logic circuits. This interface is inherently asynchronous and must be analyzed as such. Examples of widely used asynchronous circuits include synchronizer flip-flops, switch debouncers and arbiters.

Asynchronous logic components can be hard to design because all possible states, in all possible timings, must be considered. The usual method is to construct a table of the minimum and maximum time that each such state can exist and then adjust the circuit to minimize the number of such states. The designer must force the circuit to periodically wait for all of its parts to enter a compatible state (this is called "self-resynchronization"). Without careful design, it is easy to accidentally produce asynchronous logic that is unstable, that is, real electronics will have unpredictable results because of the cumulative delays caused by small variations in the values of the electronic components.

Register transfer systems

Example of a simple circuit with a toggling output. The inverter forms the combinational logic in this circuit, and the register holds the state.

Many digital systems are data flow machines. These are usually designed using synchronous register transfer logic and written with hardware description languages such as VHDL or Verilog.

In register transfer logic, binary numbers are stored in groups of flip-flops called registers. A sequential state machine controls when each register accepts new data from its input. The outputs of each register are a bundle of wires called a bus that carries that number to other calculations. A calculation is simply a piece of combinational logic. Each calculation also has an output bus, and these may be connected to the inputs of several registers. Sometimes a register will have a multiplexer on its input so that it can store a number from any one of several buses.[b]
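
The register-and-bus structure can be modelled in a few lines (an illustrative simulation with made-up values, not generated HDL): a register with a multiplexer on its input captures the value from one of two buses, chosen by a select signal, on each clock step.

```python
# Register-transfer sketch: a register whose input multiplexer selects
# between two buses; the chosen value is captured on each simulated clock edge.

def clock_edge(register: int, bus_a: int, bus_b: int, select: int) -> int:
    """Return the register's new contents after one clock edge."""
    chosen = bus_a if select == 0 else bus_b   # 2-to-1 multiplexer
    return chosen                              # the register stores the selected bus value

reg = 0
schedule = [              # (bus_a, bus_b, select) for successive cycles
    (0x3, 0xA, 0),        # load from bus A
    (0x7, 0xC, 1),        # load from bus B
    (0x1, 0xF, 0),        # load from bus A again
]
for bus_a, bus_b, select in schedule:
    reg = clock_edge(reg, bus_a, bus_b, select)
    print(hex(reg))
# Prints 0x3, 0xc, 0x1.
```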

Asynchronous register-transfer systems (such as computers) have a general solution. In the 1980s, some researchers discovered that almost all synchronous register-transfer machines could be converted to asynchronous designs by using first-in-first-out synchronization logic. In this scheme, the digital machine is characterized as a set of data flows. In each step of the flow, a synchronization circuit determines when the outputs of that step are valid and instructs the next stage when to use these outputs.[citation needed]

Computer design

Intel 80486DX2 microprocessor

The most general-purpose register-transfer logic machine is a computer. This is basically an automatic binary abacus. The control unit of a computer is usually designed as a microprogram run by a microsequencer. A microprogram is much like a player-piano roll. Each table entry of the microprogram commands the state of every bit that controls the computer. The sequencer then counts, and the count addresses the memory or combinational logic machine that contains the microprogram. The bits from the microprogram control the arithmetic logic unit, memory and other parts of the computer, including the microsequencer itself. In this way, the complex task of designing the controls of a computer is reduced to the simpler task of programming a collection of much simpler logic machines.

Almost all computers are synchronous. However, asynchronous computers have also been built. One example is the ASPIDA DLX core.[47] Another was offered by ARM Holdings.[48] They do not, however, have any speed advantages because modern computer designs already run at the speed of their slowest component, usually memory. They do use somewhat less power because a clock distribution network is not needed. An unexpected advantage is that asynchronous computers do not produce spectrally pure radio noise. They are used in some radio-sensitive mobile-phone base-station controllers. They may be more secure in cryptographic applications because their electrical and radio emissions can be more difficult to decode.[48]

Computer architecture


Computer architecture is a specialized engineering activity that tries to arrange the registers, calculation logic, buses and other parts of the computer in the best way possible for a specific purpose. Computer architects have put a lot of work into reducing the cost and increasing the speed of computers, in addition to boosting their immunity to programming errors. An increasingly common goal of computer architects is to reduce the power used in battery-powered computer systems, such as smartphones.

Design issues in digital circuits


Digital circuits are made from analog components. The design must assure that the analog nature of the components does not dominate the desired digital behavior. Digital systems must manage noise and timing margins, parasitic inductances and capacitances.

Bad designs have intermittent problems such as glitches (vanishingly fast pulses that may trigger some logic but not others) and runt pulses that do not reach valid threshold voltages.

Additionally, where clocked digital systems interface to analog systems or systems that are driven from a different clock, the digital system can be subject to metastability, where a change to the input violates the setup time for a digital input latch.

Since digital circuits are made from analog components, digital circuits calculate more slowly than low-precision analog circuits that use a similar amount of space and power. However, the digital circuit will calculate more repeatably, because of its high noise immunity.

Automated design tools


Much of the effort of designing large logic machines has been automated through the application of electronic design automation (EDA).

Simple truth-table-style descriptions of logic are often optimized with EDA tools that automatically produce reduced systems of logic gates or smaller lookup tables that still produce the desired outputs. The most common example of this kind of software is the Espresso heuristic logic minimizer. Optimizing large logic systems may be done using the Quine–McCluskey algorithm or binary decision diagrams. There are promising experiments with genetic algorithms and annealing optimizations.

To automate costly engineering processes, some EDA tools can take state tables that describe state machines and automatically produce a truth table or a function table for the combinational logic of a state machine. The state table is a piece of text that lists each state, together with the conditions controlling the transitions between them and their associated output signals.

Often, real logic systems are designed as a series of sub-projects, which are combined using a tool flow. The tool flow is usually controlled with the help of a scripting language, a simplified computer language that can invoke the software design tools in the right order. Tool flows for large logic systems such as microprocessors can be thousands of commands long, and combine the work of hundreds of engineers. Writing and debugging tool flows is an established engineering specialty in companies that produce digital designs. The tool flow usually terminates in a detailed computer file or set of files that describe how to physically construct the logic. Often it consists of instructions on how to draw the transistors and wires on an integrated circuit or a printed circuit board.

Parts of tool flows are debugged by verifying the outputs of simulated logic against expected inputs. The test tools take computer files with sets of inputs and outputs and highlight discrepancies between the simulated behavior and the expected behavior. Once the input data is believed to be correct, the design itself must still be verified for correctness. Some tool flows verify designs by first producing a design, then scanning the design to produce compatible input data for the tool flow. If the scanned data matches the input data, then the tool flow has probably not introduced errors.

The functional verification data are usually called test vectors. The functional test vectors may be preserved and used in the factory to test whether newly constructed logic works correctly. However, functional test patterns do not discover all fabrication faults. Production tests are often designed by automatic test pattern generation software tools. These generate test vectors by examining the structure of the logic and systematically generating tests targeting particular potential faults. This way the fault coverage can closely approach 100%, provided the design is properly made testable (see next section).
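
To make the idea of fault coverage concrete, the sketch below is a toy example (a hypothetical two-gate circuit, not a real ATPG tool): it injects each possible single stuck-at fault into a small netlist, runs a set of test vectors against the fault-free and faulty circuits, and reports the fraction of faults that at least one vector exposes.

```python
# Toy stuck-at fault simulation for out = (a AND b) OR c.
# Coverage = faults detected by at least one test vector / total faults.

NETS = ["a", "b", "c", "n1", "out"]

def evaluate(a: int, b: int, c: int, fault: tuple[str, int] | None = None) -> int:
    """Evaluate the circuit, optionally forcing one net to a stuck-at value."""
    def net(name: str, value: int) -> int:
        if fault is not None and fault[0] == name:
            return fault[1]          # injected stuck-at-0 or stuck-at-1
        return value

    a, b, c = net("a", a), net("b", b), net("c", c)
    n1 = net("n1", a & b)            # internal AND-gate output
    return net("out", n1 | c)        # OR gate drives the primary output

test_vectors = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]
faults = [(name, stuck) for name in NETS for stuck in (0, 1)]

detected = sum(
    any(evaluate(*v) != evaluate(*v, fault=f) for v in test_vectors)
    for f in faults
)
print(f"fault coverage: {detected}/{len(faults)}")   # 10/10 for this vector set
```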

Once a design exists, and is verified and testable, it often needs to be processed to be manufacturable as well. Modern integrated circuits have features smaller than the wavelength of the light used to expose the photoresist. Design-for-manufacturability software adds interference patterns to the exposure masks to eliminate open circuits and enhance the masks' contrast.

Design for testability


There are several reasons for testing a logic circuit. When the circuit is first developed, it is necessary to verify that the design meets the required functional and timing specifications. When multiple copies of a correctly designed circuit are being manufactured, it is essential to test each copy to ensure that the manufacturing process has not introduced any flaws.[49]

A large logic machine (say, with more than a hundred logical variables) can have an astronomical number of possible states. Obviously, testing every state of such a machine in the factory is infeasible: even if testing each state took only a microsecond, there are more possible states than there are microseconds since the universe began.
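
To put a number on this, the back-of-the-envelope calculation below compares the state count of a machine with 100 state bits against the number of microseconds in roughly 13.8 billion years (the assumed age of the universe).

```python
# Rough arithmetic behind "more states than microseconds since the universe began".
states = 2 ** 100                                        # a machine with 100 state bits
age_of_universe_us = 13.8e9 * 365.25 * 24 * 3600 * 1e6   # ~13.8 billion years in microseconds

print(f"{states:.2e} states")                        # about 1.27e30
print(f"{age_of_universe_us:.2e} microseconds")      # about 4.35e23
print(f"ratio: {states / age_of_universe_us:.0e}")   # testing at 1 us/state would take ~3e6 universe lifetimes
```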

Large logic machines are almost always designed as assemblies of smaller logic machines. To save time, the smaller sub-machines are isolated by permanently installed design-for-test circuitry and are tested independently. One common testing scheme provides a test mode that forces some part of the logic machine to enter a test cycle. The test cycle usually exercises large independent parts of the machine.

Boundary scan is a common test scheme that uses serial communication with external test equipment through one or more shift registers known as scan chains. Serial scans have only one or two wires to carry the data, minimizing the physical size and expense of the infrequently used test logic. After all the test data bits are in place, the design is reconfigured to be in normal mode and one or more clock pulses are applied, to test for faults (e.g. stuck-at low or stuck-at high) and capture the test result into flip-flops or latches in the scan shift register(s). Finally, the result of the test is shifted out to the block boundary and compared against the predicted good-machine result.
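
A minimal software sketch of the scan-chain mechanics described above is shown below (illustrative only; real boundary scan follows the JTAG state machine and instruction registers, which are omitted here, and the "logic under test" is a made-up inverter stage): test stimulus is shifted in serially, one capture clock is applied, and the response is shifted back out for comparison.

```python
# Scan-chain sketch: shift in a test pattern, apply one capture clock,
# then shift the captured response back out serially.

def shift_in(chain_length: int, pattern: list[int]) -> list[int]:
    """Serially load the scan chain (one bit enters per test clock)."""
    chain = [0] * chain_length
    for bit in pattern:
        chain = [bit] + chain[:-1]   # bits ripple through the shift register
    return chain

def capture(chain: list[int]) -> list[int]:
    """One functional clock: here the logic under test simply inverts every bit."""
    return [b ^ 1 for b in chain]

def shift_out(chain: list[int]) -> list[int]:
    """Serially unload the chain so the tester can compare against the expected result."""
    out = []
    for _ in range(len(chain)):
        out.append(chain[-1])
        chain = [0] + chain[:-1]
    return out

stimulus = [1, 0, 1, 1]
loaded = shift_in(len(stimulus), stimulus)
response = shift_out(capture(loaded))
expected = [b ^ 1 for b in stimulus]   # inverted stimulus; first-in bit comes out first
print(response == expected)            # True: the block passes this vector
```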

In a board-test environment, serial-to-parallel testing has been formalized as the JTAG standard.

Trade-offs


Cost


Since a digital system may use many logic gates, the overall cost of building a computer correlates strongly with the cost of a logic gate. In the 1930s, the earliest digital logic systems were constructed from telephone relays because these were inexpensive and relatively reliable.

The earliest integrated circuits were constructed to save weight and permit the Apollo Guidance Computer to control an inertial guidance system for a spacecraft. The first integrated circuit logic gates cost nearly US$50, which in 2024 would be equivalent to $531. Mass-produced gates on integrated circuits became the least-expensive method to construct digital logic.

With the rise of integrated circuits, reducing the absolute number of chips used represented another way to save costs. The goal of a designer is not just to make the simplest circuit, but to keep the component count down. Sometimes this results in designs that are more complicated in terms of the underlying digital logic but that nevertheless reduce the number of components, board size, and even power consumption.

Reliability


Another major motive for reducing component count on printed circuit boards is to reduce the manufacturing defect rate due to failed soldered connections and increase reliability. Defect and failure rates tend to increase along with the total number of component pins.

The failure of a single logic gate may cause a digital machine to fail. Where additional reliability is required, redundant logic can be provided. Redundancy adds cost and power consumption over a non-redundant system.

The reliability of a logic gate can be described by its mean time between failures (MTBF). Digital machines first became useful when the MTBF for a switch increased above a few hundred hours. Even so, many of these machines had complex, well-rehearsed repair procedures, and would be nonfunctional for hours because a tube burned out or a moth got stuck in a relay. Modern transistorized integrated circuit logic gates have MTBFs greater than 82 billion hours (8.2×10¹⁰ h).[50] This level of reliability is required because integrated circuits have so many logic gates.
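
A rough worked example shows why such figures are needed. Assuming independent, constant failure rates (so that failure rates simply add) and an illustrative chip with a billion gates, each at the quoted 8.2×10¹⁰-hour MTBF, the combined MTBF would be only about 82 hours.

```python
# System MTBF under the simplifying assumption of independent, constant failure rates:
# failure rates add, so MTBF_system = MTBF_gate / number_of_gates.
gate_mtbf_hours = 8.2e10    # per-gate MTBF quoted above
number_of_gates = 1e9       # assumed gate count, for illustration only

system_mtbf_hours = gate_mtbf_hours / number_of_gates
print(system_mtbf_hours)    # 82.0 hours: why per-gate reliability must be extreme
```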

Fan-out


Fan-out describes how many logic inputs can be controlled by a single logic output without exceeding the electrical current ratings of the gate outputs.[51] The minimum practical fan-out is about five.[citation needed] Modern electronic logic gates using CMOS transistors for switches have higher fan-outs.

Speed


The switching speed describes how long it takes a logic output to change from true to false or vice versa. Faster logic can accomplish more operations in less time. Modern electronic digital logic routinely switches at GHz rates, and some laboratory systems switch at more than THz.[citation needed]

Logic families

Main article: Logic family

Digital design started with relay logic, which is slow and occasionally suffered mechanical failures. Fan-outs were typically about 10, limited by the resistance of the coils and arcing on the contacts from high voltages.

Later, vacuum tubes were used. These were very fast, but generated heat, and were unreliable because the filaments would burn out. Fan-outs were typically 5 to 7, limited by the heating from the tubes' current. In the 1950s, special computer tubes were developed with filaments that omitted volatile elements like silicon. These ran for hundreds of thousands of hours.

The first semiconductor logic family was resistor–transistor logic. This was a thousand times more reliable than tubes, ran cooler, and used less power, but had a very low fan-out of 3. Diode–transistor logic improved the fan-out up to about 7, and reduced the power. Some DTL designs used two power supplies with alternating layers of NPN and PNP transistors to increase the fan-out.

Transistor–transistor logic (TTL) was a great improvement over these. In early devices, fan-out improved to 10, and later variations reliably achieved 20. TTL was also fast, with some variations achieving switching times as low as 20 ns. TTL is still used in some designs.

Emitter-coupled logic is very fast but uses a lot of power. It was extensively used for high-performance computers made up of many medium-scale components, such as the Illiac IV.

By far, the most common digital integrated circuits built today use CMOS logic, which is fast, offers high circuit density and low power per gate. This is used even in large, fast computers, such as the IBM System z.

Recent developments


In 2009, researchers discovered that memristors can implement Boolean state storage and provide a complete logic family with very small amounts of space and power, using familiar CMOS semiconductor processes.[52]

The discovery of superconductivity has enabled the development of rapid single flux quantum (RSFQ) circuit technology, which uses Josephson junctions instead of transistors. Most recently, attempts are being made to construct purely optical computing systems capable of processing digital information using nonlinear optical elements.


Notes

  1. ^ An example of an early asynchronous digital computer was the Jaincomp-B1, manufactured by the Jacobs Instrument Company in 1951.[46]
  2. ^ Alternatively, the outputs of several items may be connected to a bus through buffers that can turn off the outputs of all of the devices except one.

References

  1. ^Null, Linda; Lobur, Julia (2006).The essentials of computer organization and architecture. Jones & Bartlett Publishers. p. 121.ISBN 978-0-7637-3769-6.We can build logic diagrams (which in turn lead to digital circuits) for any Boolean expression...
  2. ^Peirce, C. S., "Letter, Peirce toA. Marquand", dated 1886,Writings of Charles S. Peirce, v. 5, 1993, pp. 541–3. GooglePreview. SeeBurks, Arthur W., "Review: Charles S. Peirce,The new elements of mathematics",Bulletin of the American Mathematical Society v. 84, n. 5 (1978), pp. 913–18, see 917.PDF Eprint.
  3. ^In 1946,ENIAC required an estimated 174 kW. By comparison, a modern laptop computer may use around 30 W; nearly six thousand times less."Approximate Desktop & Notebook Power Usage". University of Pennsylvania. Archived fromthe original on 3 June 2009. Retrieved20 June 2009.
  4. ^Kennedy, Noah (2018).The Industrialization of Intelligence: Mind and Machine in the Modern Age. London New York: Routledge, Taylor & Francis Group. pp. 87–89.ISBN 978-0-8153-4954-9.
  5. ^Chow, Rony (2021-06-05)."Claude Shannon: The Father of Information Theory".History of Data Science. Retrieved2024-11-05.
  6. ^"A Computer Pioneer Rediscovered, 50 Years On".The New York Times. April 20, 1994.
  7. ^Lee, Thomas H. (2003).The Design of CMOS Radio-Frequency Integrated Circuits(PDF).Cambridge University Press.ISBN 9781139643771.Archived(PDF) from the original on 2022-10-09.
  8. ^Puers, Robert; Baldi, Livio; Voorde, Marcel Van de; Nooten, Sebastiaan E. van (2017).Nanoelectronics: Materials, Devices, Applications, 2 Volumes.John Wiley & Sons. p. 14.ISBN 9783527340538.
  9. ^Lavington, Simon (1998),A History of Manchester Computers (2 ed.), Swindon: The British Computer Society, pp. 34–35
  10. ^US2802760A, Lincoln, Derick & Frosch, Carl J., "Oxidation of semiconductive surfaces for controlled diffusion", issued 1957-08-13 
  11. ^Frosch, C. J.; Derick, L (1957)."Surface Protection and Selective Masking during Diffusion in Silicon".Journal of the Electrochemical Society.104 (9): 547.doi:10.1149/1.2428650.
  12. ^Moskowitz, Sanford L. (2016).Advanced Materials Innovation: Managing Global Technology in the 21st century.John Wiley & Sons. p. 168.ISBN 978-0-470-50892-3.
  13. ^Christophe Lécuyer; David C. Brook; Jay Last (2010).Makers of the Microchip: A Documentary History of Fairchild Semiconductor. MIT Press. pp. 62–63.ISBN 978-0-262-01424-3.
  14. ^Claeys, Cor L. (2003).ULSI Process Integration III: Proceedings of the International Symposium.The Electrochemical Society. pp. 27–30.ISBN 978-1-56677-376-8.
  15. ^Lojek, Bo (2007).History of Semiconductor Engineering.Springer Science & Business Media. p. 120.ISBN 9783540342588.
  16. ^US 3025589  Hoerni, J. A.: "Method of Manufacturing Semiconductor Devices” filed May 1, 1959
  17. ^US 3064167  Hoerni, J. A.: "Semiconductor device" filed May 15, 1960
  18. ^Ligenza, J. R.; Spitzer, W. G. (1960-07-01)."The mechanisms for silicon oxidation in steam and oxygen".Journal of Physics and Chemistry of Solids.14:131–136.Bibcode:1960JPCS...14..131L.doi:10.1016/0022-3697(60)90219-5.ISSN 0022-3697.
  19. ^Deal, Bruce E. (1998)."Highlights Of Silicon Thermal Oxidation Technology".Silicon materials science and technology.The Electrochemical Society. p. 183.ISBN 978-1566771931.
  20. ^Lojek, Bo (2007).History of Semiconductor Engineering. Springer Science & Business Media. p. 322.ISBN 978-3540342588.
  21. ^Bassett, Ross Knox (2007).To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology.Johns Hopkins University Press. pp. 22–23.ISBN 978-0-8018-8639-3.
  22. ^Atalla, M.;Kahng, D. (1960). "Silicon-silicon dioxide field induced surface devices".IRE-AIEE Solid State Device Research Conference.
  23. ^"1960 – Metal Oxide Semiconductor (MOS) Transistor Demonstrated".The Silicon Engine.Computer History Museum. Retrieved2023-01-16.
  24. ^KAHNG, D. (1961)."Silicon-Silicon Dioxide Surface Device".Technical Memorandum of Bell Laboratories:583–596.doi:10.1142/9789814503464_0076.ISBN 978-981-02-0209-5.{{cite journal}}:ISBN / Date incompatibility (help)
  25. ^Lojek, Bo (2007).History of Semiconductor Engineering. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg. p. 321.ISBN 978-3-540-34258-8.
  26. ^"The Chip that Jack Built". Texas Instruments. 2008. Retrieved29 May 2008.
  27. ^Motoyoshi, M. (2009). "Through-Silicon Via (TSV)".Proceedings of the IEEE.97 (1):43–48.doi:10.1109/JPROC.2008.2007462.ISSN 0018-9219.S2CID 29105721.
  28. ^"Tortoise of Transistors Wins the Race - CHM Revolution".Computer History Museum. Retrieved22 July 2019.
  29. ^"Transistors Keep Moore's Law Alive".EETimes. 12 December 2018. Retrieved18 July 2019.
  30. ^ab"Applying MOSFETs to Today's Power-Switching Designs".Electronic Design. 23 May 2016. Retrieved10 August 2019.
  31. ^B. SOMANATHAN NAIR (2002).Digital electronics and logic design. PHI Learning Pvt. Ltd. p. 289.ISBN 9788120319561.Digital signals are fixed-width pulses, which occupy only one of two levels of amplitude.
  32. ^Joseph Migga Kizza (2005).Computer Network Security. Springer Science & Business Media.ISBN 9780387204734.
  33. ^2000 Solved Problems in Digital Electronics.Tata McGraw-Hill Education. 2005. p. 151.ISBN 978-0-07-058831-8.
  34. ^Chan, Yi-Jen (1992).Studies of InAIAs/InGaAs and GaInP/GaAs heterostructure FET's for high speed applications.University of Michigan. p. 1.The Si MOSFET has revolutionized the electronics industry and as a result impacts our daily lives in almost every conceivable way.
  35. ^Grant, Duncan Andrew; Gowar, John (1989).Power MOSFETS: theory and applications.Wiley. p. 1.ISBN 9780471828679.The metal–oxide–semiconductor field-effect transistor (MOSFET) is the most commonly used active device in the very large-scale integration of digital integrated circuits (VLSI). During the 1970s these components revolutionized electronic signal processing, control systems and computers.
  36. ^"Who Invented the Transistor?".Computer History Museum. 4 December 2013. Retrieved20 July 2019.
  37. ^Golio, Mike; Golio, Janet (2018).RF and Microwave Passive and Active Technologies.CRC Press. pp. 18–2.ISBN 9781420006728.
  38. ^Hittinger, William C. (1973). "Metal-Oxide-Semiconductor Technology".Scientific American.229 (2):48–59.Bibcode:1973SciAm.229b..48H.doi:10.1038/scientificamerican0873-48.ISSN 0036-8733.JSTOR 24923169.
  39. ^Peter Clarke (14 October 2005)."Intel enters billion-transistor processor era".EE Times.
  40. ^"13 Sextillion & Counting: The Long & Winding Road to the Most Frequently Manufactured Human Artifact in History".Computer History Museum. April 2, 2018. Retrieved12 October 2020.
  41. ^Golio, Mike; Golio, Janet (2018).RF and Microwave Passive and Active Technologies.CRC Press. pp. ix, I-1,18–2.ISBN 9781420006728.
  42. ^Rappaport, T. S. (November 1991). "The wireless revolution".IEEE Communications Magazine.29 (11):52–71.Bibcode:1991IComM..29k..52R.doi:10.1109/35.109666.S2CID 46573735.
  43. ^"The wireless revolution".The Economist. January 21, 1999. Retrieved12 September 2019.
  44. ^Paul Horowitz and Winfield Hill,The Art of Electronics 2nd Ed. Cambridge University Press, Cambridge, 1989ISBN 0-521-37095-7 page 471
  45. ^Maini. A.K. (2007). Digital Electronics Principles, Devices and Applications. Chichester, England.: John Wiley & Sons Ltd.
  46. ^Pentagon symposium:Commercially Available General Purpose Electronic Digital Computers of Moderate Price, Washington, D.C., 14 MAY 1952
  47. ^"ASODA sync/async DLX Core".OpenCores.org. RetrievedSeptember 5, 2014.
  48. ^abClarke, Peter."ARM Offers First Clockless Processor Core".eetimes.com. UBM Tech (Universal Business Media). Retrieved5 September 2014.
  49. ^Brown S & Vranesic Z. (2009). Fundamentals of Digital Logic with VHDL Design. 3rd ed. New York, N.Y.: Mc Graw Hill.
  50. ^MIL-HDBK-217F notice 2, section 5.3, for 100,000 gate 0.8 micrometre CMOS commercial ICs at 40C; failure rates in 2010 are better, because line sizes have decreased to 0.045 micrometres, and fewer off-chip connections are needed per gate.
  51. ^Kleitz , William. (2002). Digital and Microprocessor Fundamentals: Theory and Application. 4th ed. Upper Saddler Reviver, NJ: Pearson/Prentice Hall
  52. ^Lehtonen, Eero; Laiho, Mika (2009).Stateful implication logic with memristors.2009 IEEE/ACM International Symposium on Nanoscale Architectures. pp. 33–36.doi:10.1109/NANOARCH.2009.5226356.ISBN 978-1-4244-4957-6.

Further reading

  • Douglas Lewin, Logical Design of Switching Circuits, Nelson, 1974.
  • R. H. Katz, Contemporary Logic Design, The Benjamin/Cummings Publishing Company, 1994.
  • P. K. Lala, Practical Digital Logic Design and Testing, Prentice Hall, 1996.
  • Y. K. Chan and S. Y. Lim, "Synthetic Aperture Radar (SAR) Signal Generation", Progress In Electromagnetics Research B, Vol. 1, 269–290, 2008. Faculty of Engineering & Technology, Multimedia University, Jalan Ayer Keroh Lama, Bukit Beruang, Melaka 75450, Malaysia.

External links

Wikimedia Commons has media related to Digital electronics.