


It's not often in the rarefied world of technological research that an esoteric paper is greeted with scoffing. It's even rarer that the paper proves in the end to be truly revolutionary.
It happened a decade ago at the 1993 IEEE International Conference on Communications in Geneva, Switzerland. Two French electrical engineers, Claude Berrou and Alain Glavieux, made a flabbergasting claim: they had invented a digital coding scheme that could provide virtually error-free communications at data rates and transmitting-power efficiencies well beyond what most experts thought possible.
The scheme, the authors claimed, could double data throughput for a given transmitting power or, alternatively, achieve a specified communications data rate with half the transmitting energy--a tremendous gain that would be worth a fortune to communications companies.
Few veteran communications engineers believed the results. The Frenchmen, both professors in the electronics department at the Ecole Nationale Supérieure des Télécommunications de Bretagne in Brest, France, were then unknown in the information-theory community. They must have gone astray in their calculations, some reasoned. The claims were so preposterous that many experts didn't even bother to read the paper.
Unbelievable as it seemed, it soon proved true, as other researchers began to replicate the results. Coding experts then realized the significance of that work. Berrou and Glavieux were right, and their error-correction coding scheme, which has since been dubbed turbo codes, has revolutionized error-correction coding. Chances are fairly good that the next cellphone you buy will have them built in.
From a niche technology first applied mainly in satellite links and in at least one deep-space communications system, turbo codes are about to go mainstream. As they are incorporated into the next-generation mobile telephone system, millions of people will soon have them literally in their hands. This coding scheme will let cellphones and other portable devices handle multimedia data such as video and graphics-rich imagery over the noisy channels typical of cellular communications. And researchers are studying the use of turbo codes for digital audio and video broadcasting, as well as for increasing data speeds in enhanced versions of Wi-Fi networks.
With possibilities like these, turbo codes have jumped to the forefront of communications research, with hundreds of groups working on them in companies and universities all over the world. The list includes telecommunications giants like France Télécom and NTT DoCoMo; high-tech heavyweights like Sony, NEC, Lucent, Samsung, Ericsson, Nokia, Motorola, and Qualcomm; hardware and chip manufacturers like Broadcom, Conexant, Comtech AHA, and STMicroelectronics; and start-ups like Turboconcept and iCoding.
Turbo codes do a simple but incredible thing: they let engineers design systems that come extremely close to the so-called channel capacity--the absolute maximum capacity, in bits per second, of a communications channel for a given power level at the transmitter. This threshold for reliable communications was discovered by the famed Claude Shannon, the brilliant electrical engineer and mathematician who worked at Bell Telephone Laboratories in Murray Hill, N.J., and is renowned as the father of information theory [see sidebar, "Shannon: Cracking the Channel"].
In 1948, Claude Shannon, then a young engineer working at Bell Telephone Laboratories in Murray Hill, N.J., published a landmark paper titled "A Mathematical Theory of Communication."
In that paper, Shannon defined what the once fuzzy concept of "information" meant for communications engineers and proposed a precise way to quantify it: in his theory, the fundamental unit of information is the bit.
Shannon showed that every communications channel has a maximum rate for reliable data transmission, which he called the channel capacity, measured in bits per second. He demonstrated that by using certain coding schemes, you could transmit data up to the channel's full capacity, virtually free of errors--an astonishing result that surprised engineers at the time.
"I can't think of anybody who could ever have guessed that such a theory existed," says Robert Fano, an emeritus professor of computer science at the Massachusetts Institute of Technology, in Cambridge, and a pioneer in the information theory field. "It's just an intellectual jump; it's very profound."
The channel capacity became an essential benchmark for communications engineers, a measure of what a system can and cannot do, expressed in many cases by the famous formula C = W log₂(1 + P/N). In the formula, C is the capacity in bits per second, W is the bandwidth in hertz, P is the transmitter power in watts, and N is the noise power, also in watts.
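Shannon's formula is simple enough to evaluate directly. Here is a minimal sketch (the function name and example numbers are our own) that computes the capacity of a hypothetical 1-MHz channel:

```python
import math

def shannon_capacity(bandwidth_hz, signal_power_w, noise_power_w):
    """Channel capacity in bits per second: C = W * log2(1 + P/N)."""
    return bandwidth_hz * math.log2(1 + signal_power_w / noise_power_w)

# A 1-MHz channel with a signal-to-noise power ratio of 15 can carry
# at most 4 million bits per second, no matter how clever the code.
print(shannon_capacity(1e6, 15.0, 1.0))  # 4000000.0
```

Note that capacity grows only logarithmically with power: doubling P buys far less than doubling W.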
From space probes to cellphones and CD players, Shannon's ideas are invisibly embedded in the digital technologies that make our lives more interesting and comfortable.
A tinkerer, juggling enthusiast, and exceptional chess player, Shannon was also famous for riding the halls of Bell Labs on a unicycle. He died on 24 February 2001, at age 84, after a long battle with Alzheimer's disease.
In a landmark 1948 paper, Shannon, who died in 2001, showed that with the right error-correction codes, data could be transmitted at speeds up to the channel capacity, virtually free from errors, and with surprisingly low transmitting power. Before Shannon's work, engineers thought that to reduce communications errors, it was necessary to increase transmission power or to send the same message repeatedly--much as when, in a crowded pub, you have to shout for a beer several times.
Shannon basically showed it wasn't necessary to waste so much energy and time if you had the right coding schemes. After his discovery, the field of coding theory thrived, and researchers developed fairly good codes. But still, before turbo codes, even the best codes usually required more than twice the transmitting power that Shannon's law said was necessary to reach a certain level of reliability--a huge waste of energy. The gap between the practical and the ideal, measured in decibels--a ratio between the signal level and the noise level on a logarithmic scale--was about 3.5 dB. To chip away at it, engineers needed more elaborate codes.
That was the goal that persisted for more than four decades, until Berrou and Glavieux made their discovery in the early 1990s. When they introduced turbo codes in 1993, they showed it was possible to get within an astonishing 0.5 dB of the Shannon limit, for a bit-error rate of one in 100 000. Today, turbo codes are still chipping away at even that small gap.
The solution to overcoming the noise that plagued all communications channels, according to Shannon's seminal paper, was to divide the data into strings of bits and add to each string a set of extra bits--called parity bits--that would help identify and correct errors at the receiving end. The resulting group of bits--the data bits plus the parity bits--is called a codeword, and typically it represents a block of characters, a few image pixels, a sample of voice, or some other piece of data.
Shannon showed that with the right collection of codewords--with the right code, in other words--it was possible to attain the channel capacity. But then, which code could do it? "Shannon left unanswered the question of inventing codes," says David Forney, a professor of electrical engineering at the Cambridge-based Massachusetts Institute of Technology (MIT) and an IEEE Fellow. Shannon proved mathematically that coding was the means to reach capacity, but he didn't show exactly how to construct these capacity-approaching codes. His work, nevertheless, contained valuable clues.
Shannon thought of codewords as points in space. For example, the codeword 011 can be considered a point in a three-dimensional space with coordinates x = 0, y = 1, and z = 1. Codewords with more than three bits are points in hyperspace. Noise can alter a codeword's bits, and therefore its coordinates, displacing the point in space. If two points are close to each other and one is affected by noise, this point might fall exactly onto the other, resulting in a decoding error. Therefore, the larger the differences between codewords--the farther apart they are--the more difficult it is for noise to cause errors.
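That spatial picture has a concrete measure: the Hamming distance, the number of bit positions in which two codewords differ. A short illustrative sketch:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

# '011' and '010' differ in one position: a single flipped bit turns
# one codeword into the other, so noise easily confuses them.
print(hamming_distance("011", "010"))    # 1
# Codewords far apart survive more bit flips before being mistaken.
print(hamming_distance("00000", "11111"))  # 5
```

A code whose closest pair of codewords differs in d positions can always detect up to d - 1 flipped bits.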
To achieve capacity, Shannon demonstrated that you should randomly choose infinitely long codewords. In other words, going back to his spatial analogy, if you could make the codewords both random and as long as you wanted, you could put the points arbitrarily far from each other in space. There would be essentially no possibility of one point erroneously falling on another. Unfortunately, such long, random codes are not practical: first, because there is an astronomical number of codewords; second, because this code would be extremely slow to use as you transmitted many, many bits for just one codeword. Still, the random nature of a good code would turn out to be critical for turbo codes.
Coding experts put aside Shannon's ideal random codes, as they concentrated on developing practical codes that could be implemented in real systems. They soon began to develop good codes by cleverly choosing parity bits that constrained codewords to certain values, making these codewords unlikely to be confused with other ones.
For example, suppose we have an eight-bit codeword (seven data bits plus one parity bit). Suppose we further insist that all the codewords have an even number of 1s, making that extra parity bit a 1 if necessary to fulfill that requirement. Now, if any of the eight bits is altered by noise, including the parity bit itself, the receiver knows there was an error, because the parity count won't check--there would be an odd number of 1s.
This basic scheme can detect an error, but it can't correct it--you don't know which bit was flipped. To correct errors, you need more parity bits. Coding experts have come up with numerous and ever more sophisticated ways of generating parity bits. Block codes, Hamming codes, Reed-Solomon codes, and convolutional codes are widely used and achieve very low error rates.
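The even-parity scheme described above fits in a few lines (the function names are our own):

```python
def add_even_parity(data_bits):
    """Append one parity bit so the codeword has an even number of 1s."""
    return data_bits + [sum(data_bits) % 2]

def parity_ok(codeword):
    """True if the even-parity constraint still holds at the receiver."""
    return sum(codeword) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 0, 1])  # four 1s -> parity bit 0
print(parity_ok(word))   # True: no errors
word[2] ^= 1             # noise flips one bit
print(parity_ok(word))   # False: error detected, but its position is unknown
```

Any single flip, including of the parity bit itself, breaks the even count; locating and correcting the flip takes additional parity bits, as the article notes.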
Nevertheless, a computational-complexity problem hounded coding specialists and plagued all these codes. The complexity problem emerges as you figure the cost of a code in terms of the amount of computation required to decode your data. The closer you get to Shannon's limit, the more complicated this process becomes, because you need more parity bits and the codewords get longer and longer.
For codewords with just 3 bits, for instance, you have a total of only 2³, or 8, codewords. To approach capacity, however, you might need codewords with, say, 1000 bits, and therefore your decoder would need to search through an unimaginably large collection of 2¹⁰⁰⁰--approximately 10³⁰¹--codewords. For comparison, the estimated number of atoms in the visible universe is about 10⁸⁰.
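The arithmetic is easy to check with arbitrary-precision integers:

```python
# Counting the decimal digits of 2**1000 confirms it is roughly 10^301.
print(len(str(2 ** 1000)))  # 302 digits
print(2 ** 3)               # 8: all the codewords of a 3-bit code
```

An exhaustive search over a set that large is out of the question for any conceivable hardware, which is why a brute-force decoder cannot approach capacity.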
The upshot was that if you set about exploiting the best existing codes as your strategy for achieving arbitrarily reliable communications at Shannon's limit, you would be doomed to failure. "The computational complexity is just astronomical," says IEEE Fellow R. Michael Tanner, a professor of electrical and computer engineering and provost at the University of Illinois at Chicago. "These codes don't have the capability to do it." How could researchers get past this barrier? It was hopeless, some actually concluded in the late 1970s.
Turbo codes solved the complexity problem by splitting it into more manageable components. Instead of a single encoder at the transmitter and a single decoder at the receiver, turbo codes use two encoders at one end and two decoders at the other [see illustration, "How Turbo Codes Work"].
Researchers had realized in the late 1960s that passing data through two encoders in series could improve the error-resistance capability of a transmission--for such a combination of encoders, the whole is more than the sum of the parts. Turbo codes employ two encoders working synergistically--not in series, but in parallel.
The turbo process starts with three copies of the data block to be transmitted. The first copy goes into one of the encoders, where a convolutional code takes the data bits and computes parity bits from them. The second copy goes to the second encoder, which contains an identical convolutional code. This second encoder gets not the original string of bits but rather a string with the bits in another order, scrambled by a system called an interleaver. This encoder then reads these scrambled data bits and computes parity bits from them. Finally, the transmitter takes the third copy of the original data and sends it, along with the two strings of parity bits, over the channel.
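The data flow just described can be sketched schematically. The sketch below is ours, not Berrou and Glavieux's design: a toy running-XOR accumulator stands in for the real recursive convolutional encoders, which are considerably more involved, but the three-output structure and the interleaver are as the article describes.

```python
def toy_parity(bits):
    """Stand-in for a convolutional encoder: a running XOR (accumulator).
    Real turbo encoders use recursive convolutional codes; this toy
    version only illustrates the data flow."""
    out, state = [], 0
    for b in bits:
        state ^= b
        out.append(state)
    return out

def turbo_encode(data, interleaver):
    parity1 = toy_parity(data)                  # encoder 1: original bit order
    scrambled = [data[i] for i in interleaver]  # interleaver permutes the bits
    parity2 = toy_parity(scrambled)             # encoder 2: scrambled bit order
    return data, parity1, parity2               # all three streams hit the channel

data = [1, 0, 1, 1]
interleaver = [2, 0, 3, 1]  # a fixed permutation known to both ends
print(turbo_encode(data, interleaver))
```

Both encoders see the same data bits, only in different orders, which is precisely what lets the two decoders help each other later.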
That rearranging of the bits in the interleaver is the key step in the whole process. Basically, this permutation brings more diversity to the codewords; in the spatial analogy, it pushes the points farther apart in space. "The role of the permutation is to introduce some random behavior in the code," says Berrou. In other words, the interleaver adds a random character to the transmitted information, much as Shannon's random codes would do.
But then turbo codes, like any other code with a huge number of codewords, would also hit the wall of computational complexity. In fact, turbo codes usually work with codewords having around a thousand bits, a fairly unwieldy number. Hopeless? Yes, if you had a single decoder at the receiver. But turbo codes use two component decoders that work together to bypass the complexity problem.
The role of each decoder is to get the data, which might have been corrupted by noise along the channel, and decide which is the more likely value, 0 or 1, for each individual bit. In a sense, deciding about the value of each bit is as if you had to guess whether it's raining or not outside. Suppose you can't look out a window and you don't hear any sounds; in this case, you basically have no clue, and you can simply flip a coin and make your guess. But what if you check the forecast and it calls for rain? Also, what if you suddenly hear thunder? These events affect your guess. Now you can do better than merely flipping a coin; you'll probably say there's a good chance that it is raining and you will take your umbrella with you.
Each turbo decoder also counts on "clues" that help it guess whether a received bit is a 0 or a 1. First, it inspects the analog signal level of the received bits. While many decoding schemes transform the received signal into either a 0 or a 1--therefore throwing away valuable information, because the analog signal has fluctuations that can tell us more about each bit--a turbo decoder transforms the signal into integers that measure how confident we can be that a bit is a 0 or a 1. In addition, the decoder looks at its parity bits, which tell it whether the received data seems intact or has errors.
The result of this analysis is essentially an informed guess for each bit. "What turbo codes do internally is to come up with bit decisions along with reliabilities that the bit decisions are correct," says David Garrett, a researcher in the wireless research laboratory at Bell Labs, part of Lucent Technologies, Murray Hill, N.J. These bit reliabilities are expressed as numbers, called log-likelihood ratios, that can vary, for instance, between -7 and +7. A ratio of +7 means the decoder is almost completely sure the bit is a 1; a -5 means the decoder thinks the bit is a 0 but is not totally convinced. (Real systems usually have larger intervals, like -127 to +127.)
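A log-likelihood ratio is just the logarithm of the odds that a bit is a 1 rather than a 0, clamped to a fixed range. A small sketch (function names ours):

```python
import math

def log_likelihood_ratio(p_one):
    """LLR = ln(P(bit=1) / P(bit=0)); positive favors 1, negative favors 0."""
    return math.log(p_one / (1 - p_one))

def clamp(llr, limit=7):
    """Real decoders keep LLRs in a fixed integer range, e.g. -7 to +7."""
    return max(-limit, min(limit, round(llr)))

print(clamp(log_likelihood_ratio(0.999)))  # 7: almost certainly a 1
print(clamp(log_likelihood_ratio(0.02)))   # -4: probably a 0, not certain
print(clamp(log_likelihood_ratio(0.5)))    # 0: no opinion either way
```

The magnitude carries the confidence and the sign carries the decision, which is why these numbers are such a convenient currency for the two decoders to trade.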
Even though the signal level and parity checks are helpful clues, they are not enough. A single decoder still can't always make correct decisions on the transmitted bits and often will come up with a wrong string of bits--the decoder is lost in a universe of codewords, and the codeword it chooses as the decoded data is not always the right one. That's why a decoder alone can't do the job.
But it turns out that the reliability information of one decoder is useful to the other and vice versa, because the two strings of parity bits refer to the very same data; it's just that the bits are arranged in a different order. So the two decoders are trying to solve the same problem but looking at it from different perspectives.
The two decoders, then, can exchange reliability information in an iterative way to improve their own decoding. All they have to do, before swapping reliability strings, is arrange the strings' content in the order each decoder needs. So a bit that was strongly detected as a 1 in one decoder, for example, influences the other decoder's opinion on the corresponding bit.
In the rain analogy, imagine you see a colleague going outside carrying an umbrella. It's a valuable additional piece of information that would affect your guess. In the case of the turbo decoders, now each decoder not only has its own "opinion," it also has an "external opinion" to help it come up with a decision about each bit. "It's as if a genie had given you that information," says Gerhard Kramer, a researcher in the mathematical sciences research center at Bell Labs. This genie whispers in your ear how confident you should be about a bit's being a 1 or a 0, he says, and that helps you decode that bit.
At the heart of turbo coding is this iterative process, in which each component decoder takes advantage of the work of the other at a previous decoding step. After a certain number of iterations, typically four to 10, both decoders begin to agree on all bits. That means the decoders are not lost anymore in a universe of codewords; they have overcome the complexity barrier.
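The shape of that iteration can be sketched as follows. This is a structural skeleton only: `decode_step` here merely adds the incoming opinions together, standing in for a real soft-in/soft-out decoder (such as the BCJR algorithm) that would also use its parity bits to refine each LLR.

```python
def decode_step(channel_llrs, prior_llrs):
    """Placeholder for a real soft-in/soft-out decoder."""
    return [c + p for c, p in zip(channel_llrs, prior_llrs)]

def turbo_decode(llrs1, llrs2, interleaver, iterations=6):
    """llrs1: channel LLRs in original order; llrs2: in interleaved order."""
    n = len(llrs1)
    deinterleaver = [0] * n
    for pos, src in enumerate(interleaver):
        deinterleaver[src] = pos
    extrinsic = [0.0] * n
    for _ in range(iterations):
        # Decoder 1 works on the bits in their original order.
        out1 = decode_step(llrs1, extrinsic)
        # Its opinions are reordered for decoder 2, which sees scrambled bits.
        out2 = decode_step(llrs2, [out1[i] for i in interleaver])
        # Decoder 2's opinions are unscrambled and fed back to decoder 1.
        extrinsic = [out2[deinterleaver[i]] for i in range(n)]
    # Final hard decision: the sign of each accumulated opinion.
    return [1 if llr > 0 else 0 for llr in extrinsic]
```

Each pass strengthens the opinions the two decoders agree on; in a real decoder the loop stops once the hard decisions stabilize, typically after four to ten iterations.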
"It's a divide-and-conquer solution," says Robert J. McEliece, a professor of electrical engineering at the California Institute of Technology, in Pasadena, and an IEEE Fellow. "It broke the problem into two smaller pieces, solved the pieces, and then put the pieces back together."
Another way of thinking about the turbo decoding process is in terms of crossword puzzles, Berrou says. Imagine that Alice solved a crossword and wanted to send the solution to Bob. Over a noiseless channel, it would be enough to send the array with the words. But over a noisy channel, the letters in the array are messed up by noise. When Bob receives the crossword, many words don't make sense. To help Bob correct the errors, Alice can send him the clues for the horizontal and vertical words. This is redundant information, since the crossword is already solved, but it nevertheless helps Bob, because, as with parity bits, it imposes constraints on the words that can be put into the array. It's a problem with two dimensions: solving the rows helps to solve the columns and vice versa, like one decoder helping the other in the turbo-decoding scheme.
Flash back 11 years as an amused 42-year-old Berrou wanders the corridors of the convention center in Geneva, peeking over the shoulders of other attendees and seeing many of them trying to understand his paper. At the presentation, young Ph.D. students and a scattering of coding veterans pack the auditorium, with people standing by the door. When Berrou and Glavieux finish, many surround them to request more explanations or simply to shake their hands.
Still, convincing the skeptics that the work had no giant overlooked error took time. "Because the foundation of digital communications relied on potent mathematical considerations," Berrou recollected later, "error-correcting codes were believed to belong solely to the world of mathematics."
What led Berrou and Glavieux to their important breakthrough was not some esoteric theorem but the struggle to solve real-world problems in telecommunications. In the late 1980s, when they began to work on coding schemes, they were surprised that an important concept in electronics--feedback--was not used in digital receivers.
In amplifiers, a sample of the output signal is routinely fed back to the input to ensure stable performance. Berrou and Glavieux wondered, why shouldn't it work for coding as well?
They ran the first experiments with their novel coding scheme in 1991 using computer simulations, and when the results came out, they were stunned. "Every day I asked myself about the possible errors in the program," says Berrou.
The first thing Berrou and Glavieux did after confirming that their results were correct was to patent the invention in France, Europe, and the United States. At the time, France Télécom was the major sponsor of their work, so the French company took possession of the turbo code patents. The inventors and their institution, however, share part of the licensing profits. (Turbo codes were not patented in Asia, where they can therefore be used for free.)
It was France Télécom that asked Berrou to come up with a commercial name for the invention. He found the name when one day, watching a car race on TV, he noticed that the newly invented code used the output of the decoders to improve the decoding process, much as a turbocharger uses its exhaust to force air into the engine and boost combustion. Voilà: "turbo codes"!
Turbo codes are already in use in Japan, where they have been incorporated into the standards for third-generation mobile phone systems, known officially as the Universal Mobile Telecommunications System (UMTS). Turbo codes are used for pictures, video, and mail transmissions, says Hirohito Suda, director of the Radio Signal Processing Laboratory at NTT DoCoMo, in Yokosuka, Japan. For voice transmission, however, convolutional codes are used, because their decoding delays are smaller than those of turbo codes.
In fact, the decoding delay--the time it takes to decode the data--is a major drawback to turbo codes. The several iterations required by turbo decoding make the delay unacceptable for real-time voice communications and other applications that require instant data processing, like hard disk storage and optical transmission.

For systems that can tolerate decoding delays, like deep-space communications, turbo codes have become an attractive option. In fact, last September, the European Space Agency, based in Paris, France, launched SMART-1, the first probe to go into space with data transmission powered by turbo codes. ESA will also use the codes on other missions, such as Rosetta, scheduled for launch early this year to rendezvous with a comet. The National Aeronautics and Space Administration, in Washington, D.C., is also planning missions that will depend on turbo codes to boost reliable communications. "The first missions that will be using these codes will be Mars Reconnaissance Orbiter and Messenger," says Fabrizio Pollara, deputy manager of the communications systems and research section at NASA's Jet Propulsion Laboratory in Pasadena, Calif.
Digital audio broadcasting, which provides CD-quality radio programs, and satellite links, such as the new Global Area Network of Inmarsat Ltd., in London, are both also about to incorporate turbo codes into their systems.
And beyond error correction, turbo codes--or the so-called turbo principle--are also helping engineers solve a number of communications problems. "The turbo-coding idea sparked lots of other ideas," says Lajos Hanzo, a professor in the School of Electronics and Computer Science at the University of Southampton, United Kingdom, and an IEEE Fellow. One example is in trying to mitigate the effects of multipath propagation--that is, signal distortion that occurs when you receive multiple replicas of a signal that bounced off different surfaces. Turbo codes may eventually help portable devices solve this major limitation of mobile telephony.
Finally, another major impact of turbo codes has been to make researchers realize that other capacity-approaching codes existed. In fact, an alternative that has been given a new lease on life is low-density parity check (LDPC) codes, invented in the early 1960s by Robert Gallager at MIT but largely forgotten since then. "In the 1960s and 1970s, there was a very good reason why nobody paid any attention to LDPC codes," says MIT's Forney. "They were clearly far too complicated for the technology of the time."
Like turbo codes, LDPC attains capacity by means of an iterative decoding process, but these codes are considerably different from turbo codes. Now researchers have implemented LDPC codes so that they actually outperform turbo codes and get even closer to the Shannon limit. Indeed, they might prove a serious competitor to turbo codes, especially for next-generation wireless network standards, like IEEE 802.11 and IEEE 802.16. "LDPC codes are using many of the same general ideas [as turbo codes]," says Caltech's McEliece. "But in certain ways, they are even easier to analyze and easier to implement." Another advantage, perhaps the biggest of all, is that the LDPC patents have expired, so companies can use them without having to pay for intellectual-property rights.
Turbo codes put an end to a search that lasted for more than 40 years. "It's remarkable, because there's this revolution, and nowadays if you can't get close to Shannon capacity, what's wrong with you?" says the University of Illinois's Tanner. "Anybody can get close to the Shannon capacity, but let's talk about how much faster your code goes...and if you are 0.1 dB from Shannon or 0.001 dB."
It was the insight and naiveté typical of outsiders that helped Berrou and Glavieux realize what the coding theory community was missing. "Turbo codes are the result of an empirical, painstaking construction of a global coding/decoding scheme, using existing bricks that had never been put together in this way before," they wrote a few years ago.
Berrou says their work is proof that it is not always necessary to know about theoretical limits to be able to reach them. "To recall a famous joke, at least in France," he says, "the simpleton didn't know the task was impossible, so he did it."
The 2004 International Conference on Communications, to be held in Paris on 20-24 June, will include several sessions on turbo codes. See http://www.icc2004.org/.
"What a Wonderful Turbo World," an electronic book by Adrian Barbulescu, contains a detailed analysis of turbo codes and source code in C for simulations. Seehttp://people.myoffice.net.au/%7Eabarbulescu/.
For a discussion of implementation issues and a presentation of a real-life prototype, see Turbo Codes: Desirable and Designable, by A. Giulietti, B. Bougard, and L. Van der Perre (Kluwer Academic, Dordrecht, the Netherlands, 2004).
Erico Guizzo is the digital product manager at IEEE Spectrum. An IEEE Member, he is an electrical engineer by training and has a master's degree in science writing from MIT.
The NextGen system will simplify and streamline business processes
My personal mission as an IEEE volunteer has been to work to make the institute the premier organization for technical professionals to engage with. My objectives focus on implementing practical measures to inspire individuals to call IEEE their lifelong professional home. As IEEE president and a longtime volunteer, I am committed to strengthening IEEE through improvements to its business process framework and to continuing my efforts to clarify financial reporting and promote fiscal responsibility.
We, the IEEE Board of Directors, are the fiduciaries responsible for steering the organization toward a sustainable future by adopting sound, ethical, and legal governance and financial management policies, as well as ensuring IEEE has adequate resources to advance its mission and vision.
As a strong believer in and a steward of this mission and vision, it is both my great honor and great duty to help guide the organization, supporting the work of IEEE around the world, and directing our policies, strategies, and governance to advance IEEE's mission and impact.
To that end, IEEE acted on the request from our volunteers for an improved financial and contract system to simplify, streamline, and save time that also allows for improved business insights, workflow, and decision-making. In collaboration with our volunteer leaders and professional staff, IEEE invested in the tools and processes to create a better volunteer experience. This effort is the NextGen Financial System.
IMPROVED PROCESSES
The development of this new financial support system began in 2020. IEEE's current systems were approaching the end of their operational life. Additionally, IEEE's financial operations had grown in scale and complexity, and the current systems could not provide the level of timeliness, detail, and flexibility that volunteers expected. The systems required volunteers and staff to create manual workarounds to provide the data they needed to gain the necessary insight into IEEE's business activities. This was not ideal.
NextGen Financials Cloud
This improved financial process saves time while providing greater visibility in real time. This streamlined approach for financial reporting makes day-to-day activities easier for volunteers, with dashboards to review your financials and with more details. It's the one source that you can go to, rather than dealing with emails, phone calls, and waiting on others.
NextGen Banking
This replaces Concentration Banking/CBRS and provides a streamlined approach with a one-stop resource for your global banking needs. Its integrated self-service options provide greater flexibility and ease of use.
NextGen Expense Reimbursement
This efficient expense-reporting process offers a fast, easy, efficient, and automated reimbursement experience for IEEE volunteers. It is the new name for IEEE's volunteer expense reimbursement tool, Concur.
NextGen Contracts
IEEE has moved from a manual contract-review process to an automated process. Users can view a dashboard that shows the status of the contract, versus looking for an email or notes from a phone conversation. Everything is at your fingertips, with contract life-cycle visibility and the ability for real-time collaboration.
In May we rolled out NextGen Financials. This cloud-based management system supports comprehensive project-based financials and provides support for automating contracts and purchase orders as well as tracking and reporting costs and expenses. It enables both volunteer leaders and staff to better manage their budgets and track spending across the organization, where everything from expense reporting to end-to-end contract management is online and cost accounting is clear.
The system is available for those who are authorized to process financials and contracts, including IEEE Technical Activities society and council leadership; geographic unit treasurers at the region, council, section, and chapter levels; conference organizers; and IEEE Standards Association officers.
IEEE understands this is a change for our volunteers, and it will take time to adapt to the new platform. Training and educational resources have been made available throughout the transition period. For more information, please visit the NextGen website.
As a strong investment in IEEE's future, NextGen upgrades our financial systems and advances the way we manage our business activities. By streamlining and simplifying existing processes with NextGen, we have become more nimble as an organization.
With greater visibility and governance over IEEE processes, we can make quicker, more informed decisions. And we are better positioned to manage day-to-day activities with a greater focus on the mission of IEEE.
Thank you for your continued support. Please share your thoughts with me at president@ieee.org.
EV range anxiety hangs in the balance
Lawrence Ulrich is an award-winning auto writer and former chief auto critic at The New York Times and The Detroit Free Press.
During Tesla's Battery Day in September 2020, Drew Baglino, senior vice president of powertrain and energy engineering, and CEO Elon Musk introduced the 4680 battery.
On July 26, in a rare glimpse into progress on his company's purported game-changer battery, Tesla CEO Elon Musk was in true Muskian form, raising hopes and tempering expectations in one cryptic swoop:
We have successfully validated performance and lifetime of our 4680 cells produced at our Kato facility in California. We are nearing the end of manufacturing validation at Kato: field quality and yield are at viable levels and our focus is now on improving the 10% of manufacturing processes that currently bottleneck production output. While substantial progress has been made, we still have work ahead of us before we can achieve volume production. Internal crash testing of our structural pack architecture with a single-piece front casting has been successful.
Any whisper of production bottlenecks at Tesla inevitably recalls the Model 3's troubled launch in 2017 and 2018, the one that had Musk sleeping in his office. Yet that make-or-break sedan ultimately made Tesla the world leader in EVs, and one of the world's most valuable companies.
The enlarged, cylindrical 4680 cell, which Tesla first teased at its Battery Day last September, brings its own sky-high hopes and challenges: If Tesla can pull off in-house, vertically integrated battery manufacturing, and the cell performs as advertised, the 4680 could fuel Musk's dreams to build millions of EVs a year around the world. Tesla's goals include boosting driving range by more than 50 percent—16 percent of that due to the 4680's newfound punch—while halving battery costs and bringing a $25,000 Tesla to showrooms. Tesla continues to dominate EV sales in America, but its seemingly insurmountable lead in driving range is under assault. The Arizona-built Lucid Air sedan, the work of Musk's former Model S chief engineer, has demonstrated it can travel up to 517 miles, a lofty record for any EV. Tesla's best, the Model S Long Range, is EPA-rated for 405 miles.
"The Lucid Air is the first car to show range that's not just competitive (with Tesla), but better, an astonishing achievement," said Venkat Viswanathan, battery researcher and associate professor of mechanical engineering at Carnegie Mellon University. "It shows it's no longer a one-horse race."
To gallop back in front, Tesla is counting on that 50 percent range leap: it would let a vehicle like the Model S top 600 miles, diesel-like stamina that seemed unimaginable a few years ago. So much for "range anxiety." Sandy Munro, the Detroit-area engineer who has gained YouTube fame for his reverse-engineered teardowns and analyses of EVs, is among the experts convinced that Tesla will pull it off.
"For the cell itself, no question, it will kick the daylights out of everybody," Munro says.
That kick begins with the 4680's form factor and what surrounds it, more than what ends up inside it, Munro and other experts told IEEE Spectrum. Where Battery Day trumpeted the new cell as having five times the energy and six times the power of Tesla's Panasonic-built 2170 cells, Musk conveniently failed to mention that the larger 4680 has nearly 5.5 times the volume, simply due to its larger dimensions: 46-by-80 millimeters, versus 21-by-70 millimeters. Yet this "bigger can" brings big benefits. Each jelly-roll cell packs in more active battery material and wastes less space on metal casing. A so-called "structural battery pack" (also called "cell-to-pack" construction), touted as a Tesla innovation, is in fact already a staple of several EVs, especially in China—including the red-hot, roughly $5,000 Wuling Mini from General Motors' Chinese joint venture. That construction saves more space by trading multiple module cases for a more streamlined pallet of cells wired in parallel.
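The volume arithmetic behind the "bigger can" is easy to check. A minimal sketch, treating each cell as an ideal cylinder (the function name here is ours, not Tesla's):

```python
import math

def cell_volume_cm3(diameter_mm: float, height_mm: float) -> float:
    """Volume of an idealized cylindrical cell, in cubic centimeters."""
    radius_cm = diameter_mm / 2 / 10
    height_cm = height_mm / 10
    return math.pi * radius_cm ** 2 * height_cm

v_4680 = cell_volume_cm3(46, 80)  # ~133 cm^3
v_2170 = cell_volume_cm3(21, 70)  # ~24 cm^3
print(round(v_4680 / v_2170, 2))  # -> 5.48
```

Since roughly five times the energy arrives in about 5.5 times the volume, the win comes mostly from packaging—less casing and dead space per unit of active material—rather than from a leap in energy density.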
"It's basically a giant box without dividers," Munro says.
And where current Tesla packs feature what Munro calls "a crappy cooling design" with thermal channels between cells—superfluous, because most heat is concentrated at cell ends—Munro says 4680 cells will rest atop a liquid-cooled thermal plate that's become a staple of EVs from GM, Ford, Volkswagen, Porsche, and Rivian.
"When we first tore apart the Model 3, we just couldn't figure that out," Munro says of the previous method.
Munro further estimates Tesla's redesigned pack, including adhesive bonds between cells and modern welding techniques, will reduce steel use by 30 to 40 percent. Stamped grid plates on top will bring power back to terminals. Munro's team mocked up the projected pack, including cut and painted wooden dowels to mimic the beefy new cell. That pack swallowed 960 larger cells, versus 4,416 cells for the 2170 variety. Totaling potential gains, Munro estimates Tesla could stuff 130 kWh of new cells into the same-sized pack that houses just 72 kWh in the Model 3.
His analysis suggests a 4680 cell with roughly 9,000 mAh, versus 5,000 mAh for the 2170. Munro cautions these aren't definitive estimates; Tesla has yet to show its 4680 cell in physical form, or reveal its chemistry or specs. Still, experts say a long list of innovations could widen Tesla's already significant lead in driving range and efficiency versus the global giants.
"The holistic approach to EV and battery engineering is Tesla's key advantage," Munro says.
That chain-of-gains approach includes a "tabless" cell design, which some experts see as the 4680'sbest physical innovation. Eliminating traditional tabs that connect a cathode and anode to battery terminals simplifies manufacture, saves space and reduces ohmic resistance, a major hurdle in safely charging a large-format battery.

"You actually have a shorter path length in a large tabless cell than you have in a smaller cell with tabs," Musk explained a year ago at Tesla's Battery Day.
Musk actually cites Tesla's growing manufacturing expertise in batteries—eliminating steps, streamlining processes, slashing costs—as its true competitive edge. That vision includes not just 4680 factories adjacent to car plants near Austin and Berlin but also chemical plants to produce cathodes and lithium hydroxide, according to Simon Moores, managing director of Benchmark Mineral Intelligence. In one example, Tesla plans to use raw metallurgical silicon to boost its content in cells, using a scalable elastic polymer coating to conduct ions, boosting range another 20 percent and reducing pack costs by five percent, according to Drew Baglino, its SVP of engineering.
Two weeks before Battery Day, Tesla bought three patent applications for just $3 from Springpower International, an obscure Canadian company that Tesla now seems to have acquired outright. One Springpower invention, described by Musk and Baglino at Battery Day, uses recirculation to skip the step of treating contaminated water in cathode production—up to 4,000 gallons of effluent for every ton of cathode material. The same process might ease battery recycling and grid-storage solutions.
"It's insanely complicated, like digging a ditch, filling it in, and digging the ditch again," Musk said. "We looked at the entire value chain and said, 'How can we make this as simple as possible?'"
An electrode-coating process using dry film, linked to another Tesla supplier setting up shop in Texas, would eliminate the toxic, expensive solvents used in aqueous coatings.
It all sounds amazing, on paper. Gary Koenig, a battery materials expert and associate professor of chemical engineering at the University of Virginia, cautioned that getting the cells into mass production at cost is another story.
"Getting these things to be scalable is really hard," Koenig says of Tesla's 10-gigawatt-hour pilot plant. "And you have to get to high volume to bring those costs down, and that's not easy to do."
Viswanathan agrees that Battery Day blended tangible gains with Musk's familiar visionary spitballing, making it all hard to parse.
"It's difficult to separate the signal from the noise in these Tesla presentations," Viswanathan says. "But there's always signal; the question is how much."
"Still, I have no doubt they will be able to produce the cell at volume," he continued. "More than any automaker, they have the talent to do it. The question really is how quickly, at what price point, safety levels and manufacturing defects."
As for chemistry, Viswanathan said a nickel-rich, low-cobalt design will surely power pricier, longer-range models, to take on rivals including Lucid and General Motors—the latter with its new nickel-intensive Ultium cells. Regarding Tesla's vaguely stated "diversified cathode" strategy, Viswanathan said it's possible that Tesla would build some lithium-iron-phosphate (LFP) batteries in the 4680 format. That chemistry, once dismissed for its puny energy density, is suddenly prized for low cost, long life, and reassuring safety, especially for entry-priced models. Tesla has already begun selling Model 3s in China and Europe with CATL's prismatic LFP cells. And on August 27, Tesla dangled LFP-powered versions of the Model 3 Standard Range Plus to Americans awaiting backlogged deliveries. Tesla contacted some reservation holders, offering them a car as early as September if they agreed to accept a Model 3 SR+ with an LFP pack and an estimated 253-mile range, 10 fewer miles than versions with 2170 batteries.
Musk himself stated that LFP may make up 50 percent of all lithium-ion cells in cars, versus fewer than 10 percent today. Viswanathan notes that, by packing in more active battery material via 4680 cells and structural packs, LFP could enjoy a serious boost in driving range at an ultra-affordable cost.
So, when might the world actually drive Teslas powered by the 4680? The question takes on more urgency, considering Tesla's frustrating record of tardiness. In late April, Musk said the battery was 12 months away from production, if not 18 months. Existing suppliers, including Panasonic, CATL, LG Energy Solution, and SK Innovation, may well deliver the 4680 before Tesla itself. (Panasonic's new chief executive confirmed his company will make a large investment to build 4680 cells if they prove viable.)
After months of playing coy, Tesla finally confirmed in August that the Cybertruck's Texas rollout would be pushed back to 2022, due to battery shortages. That massively scaled pickup is one power-hungry candidate for the 4680 cell, along with the also-delayed commercial Semi.

Cybertruck, unveiled to great fanfare in 2019, will now be beaten to market by this year's GMC Hummer pickup, the Rivian R1T, and perhaps even Ford's F-150 Lightning in 2022. Model Y SUVs from Giga Texas and Giga Berlin are also in line for the 4680, built on those factories' adjacent battery-production lines. Tesla continues to hedge its bets, expanding existing contracts with Panasonic and other suppliers.
Tesla's Battery Day targets, meanwhile, include a seemingly quixotic goal to ramp up battery production (including from supplier partners) to 3 terawatt-hours a year by 2030. That's a nearly 100-fold jump from Tesla's current capacity in Nevada. It would be enough to supply 20 million annual Teslas (or rival automakers who buy Tesla batteries), versus roughly 500,000 global Tesla sales in 2020.
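The scale of that target can be sanity-checked with rough arithmetic. A sketch under stated assumptions: the ~35 GWh figure for Giga Nevada's current annual capacity is our assumption, not a number from the article, and the target is read as annual cell output in terawatt-hours:

```python
target_gwh = 3000.0  # 2030 target: 3 TWh of cells per year, in GWh
nevada_gwh = 35.0    # assumed current Giga Nevada annual capacity (our estimate)

# "Nearly 100-fold" jump over today's Nevada output:
print(round(target_gwh / nevada_gwh))  # -> 86

# Average battery budget if 3 TWh supplied 20 million vehicles a year:
vehicles_per_year = 20_000_000
print(target_gwh * 1_000_000 / vehicles_per_year)  # -> 150.0 kWh per vehicle
```

A 150 kWh average is generous for sedans but plausible across a mix that includes pickups, Semis, and stationary storage, which is consistent with the "seemingly quixotic" framing.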
Yet that ambition is driven by harsh reality: a global tsunami of EV demand threatens to outstrip battery supply, kneecapping EV production and adoption, and delaying a climate-critical switch from fossil fuels to electricity. Energy research company Wood Mackenzie estimates EVs will grow from 10 million on roads today to 100 million in 2030 and 400 million after 2040. EVs would hog 90 percent of battery demand over the next two decades. Even at Wood Mackenzie's conservative estimate, battery demand would be about eight times what today's factories can deliver. To Tesla, it's all about a Marshall Plan-level campaign to build not just battery factories, but factories that churn out more batteries, more affordably—for its own global domination, and to achieve Musk's more altruistic goals.
"They've been constrained for years on batteries," Viswanathan says. "They see this as a critical piece of their growth story, to meet the volume demand that they and the industry will have in coming years."