
Resurrection

The Bulletin of the Computer Conservation Society

ISSN 0958-7403

Issue Number 5

Spring 1993

Contents

Editorial Nicholas Enticknap, Editor
Guest Opinion Maurice Wilkes
Society News Tony Sale, Secretary
The Williams Tube Revisited Tony Sale
Andrew Booth's Computers at Birkbeck College Andrew Collin
The Origins of Packet Switching Derek Barber
Altair and After: the Original PC Revolution Robin Shirley
Letters to the Editor  
Working Party Reports  
Forthcoming Events  
Committee of the Society  
Aims and Objectives  


Editorial

Nicholas Enticknap, Editor


Welcome to the latest issue of Resurrection, which we are pleased to say has taken a much shorter time to produce than its predecessor did. We have taken various steps to speed up the production process, and these are now bearing fruit. The next issue is already in preparation as I write.

In the meantime, there is much catching up to do. This issue is the largest to date, and includes edited versions of four talks which took place in the winter and spring of 1991-92. They are varied both chronologically and technically, spanning computing developments from the Williams tube of the forties to the pioneer personal computers of the seventies, and encompassing some interesting fifties developments as well as our first telecommunications feature, on the evolution of packet-switching.

In this issue we have included a couple of letters from readers who were spurred into epistolary activity by the articles in our last issue. Letters are always welcome, especially from readers who are rarely able to attend our Science Museum gatherings.

The Society has been very active since the last issue, and details of some of these activities are to be found in the Secretary's Society News piece and the working party reports. The most notable new developments are the formation of a sixth working party to restore the Science Museum's unique Elliott 401, and the creation of a so far unnamed branch of the Society in the Manchester area.

Work has also been proceeding on the plan to acquire Bletchley Park with a view to establishing a museum of computing and cryptology on the site. The Bletchley Park Appeal, launched in July, has proved a great success, with good press and TV coverage.

The media publicity was influential in persuading British Telecom to withdraw all their planning applications for the demolition of the wartime buildings in the Park. Negotiations have now started between the Bletchley Park Trust (of which our Secretary is now a member) and both Property Holdings and BT about the purchase of the Park.

One activity not reported in this issue is the diligent archiving work run by Harold Gearing with the assistance of two members of the Science Museum staff, Susan Julian-Ottie and Helen Kingsley. We hope to provide a detailed account of this work in a future issue.



Guest Opinion

Maurice Wilkes


It gives me great pleasure to be invited to provide a guest editorial for Resurrection.

As the first President of the British Computer Society, I find it particularly pleasing that the BCS has, together with the Science Museum, formed the Computer Conservation Society to document and illustrate with working computers the history of the now recognised profession of computing.

The explosive expansion of computing into all walks of life has been quite breathtaking. Such a rapid expansion has left many of the historical aspects poorly recorded, and unfortunately many important artefacts have been destroyed through ignorance of their true historic worth.

The Computer Conservation Society offers a mechanism whereby this can be and is being remedied. It will however only succeed if the people who worked on hardware, software and systems design look out all their old documents and records, including program tapes, and place them in the safe keeping of the CCS for proper archiving and preservation. We, who designed, built, maintained and programmed early computers, owe it to ourselves and future generations to see that our endeavours are properly recorded and archived.

The Computer Conservation Society must also redouble its efforts to involve more young people in its activities. Here the CCS's pioneering work on simulators has an important part to play. Many young people relish the challenge of producing complex interactive graphics on a modern PC. That these graphics emulate an early computer is an added challenge. It provides a bridge over which young computer professionals can cross into an alien world of thermionic valves, high voltages, early transistors and esoteric memory systems.

The concept of working with old computers owes much to the vision of Doron Swade, the Curator of Computing at the Science Museum, and to the support received from Dr Cossons, Director of the Science Museum. The partnership between a museum and a professional body has worked extremely well and may offer a useful model for other professions. May the Computer Conservation Society continue its pioneering work for many years to come.



Society News

Tony Sale, Secretary


A major development has occurred since the last issue which has made the future of the Society look rather more secure.

The Science Museum has instigated its Computer Conservation and Archiving Project, which substantially strengthens the relationship between the Society and the Museum. This is the initiative of Suzanne Keene, who joined the Museum as Head of Collections Services in the middle of 1992.

Essentially what has happened is that the Museum is now using the activity of the Society as a model for the procedures to be followed with similar projects, whether involving computers or other artefacts. In return, the Museum is providing greater support to the Society, both financially and in the provision of resources.

This has helped, for instance, with the archiving project being managed by Harold Gearing. Harold is now being assisted by Susan Julian-Ottie, who joined the Museum staff at the beginning of November. This means that the archiving project can make quicker progress - essential as the amount of material in our archive is increasing daily as people generously donate their own material to us.

The most obvious sign of the new relationship is the decision to restore the Elliott 401. This machine has been waiting patiently in the corner of our working area for a couple of years, but work was unable to begin until recently. The first meeting of the new Elliott 401 Working Party was held on 22 September, under the chairmanship of Chris Burton.

The emphasis is very much on conservation in addition to restoration. In this respect the Working Party is being assisted by Helen Kingsley, another to have joined the Museum staff over the past six months.

The Society is expanding in other directions, too. We have been approached by the Manchester Museum of Science and Industry about a Pegasus that they own, and the outcome is that a north-western branch is being set up to organise a restoration project for that machine. It is, incidentally, believed to be the first Pegasus ever delivered.

We hope that this branch will also provide a focal point for members in the north of England who are usually unable to come to our Science Museum meetings and seminars.



The Williams Tube Revisited

Tony Sale


The Williams tube was invented by Freddie Williams in 1946. During the next two years Tom Kilburn refined the technique, and used it to produce the world's first stored program computer - the date was 21 June 1948. This article describes the development of this interesting memory technology which was used in all the world's most powerful computers during the period 1949-54.

Memory has been a critical consideration for computer designers since the beginning. Babbage, in his proposal for the Analytical Engine, was very ambitious, talking about 100 memories of 50 digits each - though I think the biggest design that he actually produced was 16 x 50. He had certainly appreciated the need for memory, though, as did Vannevar Bush. He produced the designs in about 1936 for what was an exact equivalent of a modern VDU; and he put into that a very large memory. That's an important point in the development of ideas for what you needed in order to make a computer.

ENIAC had 20 memory locations, I think of 20 digits each - all valve memory. The original EDVAC proposal by von Neumann and others was for 4000 words of 40 bits; Turing's ACE proposal was for 256 x 30 bits, and Zuse produced a mechanical memory of 1024 x 16 bits.

So all those pioneers had appreciated the importance of memory, and there were lots of attempts to produce different sorts of memory. I've classified their broad characteristics.

The first is quantisation - that is, the number of discrete levels that are stored in any one location. Babbage was working on a decimal notation and Zuse was working on a binary notation. Binary is better, as adjacent stored levels can be kept further apart.

You can then have a sort of memory which is position dependent - this is what the electrostatic memories are. Position dependent can either mean in time or in space. An example of a time-dependent memory is an in-flight memory, where you are storing things by the fact that it takes a long time to go round a cycle, so that storage occurs during the time that it's in flight in the medium. A space-dependent memory could be an individual spot on a screen or an individual ferrite core.

Then you've got the problem of read-out - whether it's destructive or non-destructive. Babbage's read-out on his wheels was a destructive read-out (he had to restore them), but the Scheutz engine had a non-destructive read-out. The downside of that is that you had to differentiate between one of 10 quite close levels in order to find out which one it was actually set at, and that made it slightly unreliable.

The roots of electrostatic memory are in radar and the need for moving target identification. The problem with radar, as it got more and more powerful and got more and more echoes, was that you got more and more clutter. The difficulty was to sort out the moving target from the fixed background clutter. Quite early on people said: well, why don't you just store the background clutter, because it's not changing very much, and cancel it out on the next scan?

So there was an impetus to have a storage system which would hold all the information on a radar trace for one scan period and cancel out the fixed echoes. You should then get the moving one showing up.

One of the first ways tried was to use iconoscopes, because TV was well developed by the outbreak of the war and iconoscopes were easily available. The idea was to write a pattern and then compare the two patterns by reading them back again. That actually worked for MIT, but when it was tried for storage for digital computers there was a problem.

The basic problem with the iconoscope or any TV artefact was that you could tolerate a high amount of noise, because the eye does not notice it due to integration over time. So what looks like a nice static picture can actually be very noisy and have dropouts and bits missing.

So although iconoscopes gave perfectly good pictures for studio presentation of video information, when you get down to saying `what's on that little bit there?' the answer was that in fact it fluctuated an enormous amount, and depended on all sorts of factors which they couldn't control accurately. So although iconoscopes were OK for doing a broad brush thing like TV or moving target identification in radar, they were never successful as a storage system for information for a computer.

Freddie Williams was working on radar at TRE, and he went to the States both in 1945 and 1946 to see what they were doing at MIT on moving target identification. He saw all the work they were doing, and he came back to Malvern and set up a two tube system. But while he was in the States he also worked on the Waveforms book.

In that book is a description of the moving target cancellation using a cathode ray tube. By this time it had been realised that you could store charges on the inside phosphor face of a CRT, and by suitably modulating the velocity of the beam you could read out information.

When Williams came back to Malvern he then started setting up the two tube experiment, which involved reading from one tube, storing on another tube, then reading back from the second, so cycling round on two tubes. While he was doing this he discovered the anticipation effect, which I'll explain in a moment. That led to continuous regeneration and to his first patents in December 1946.

Williams wasn't the only person working on electrostatic storage. It was realised that electrostatic storage had the potential for very fast storage, as you could switch an electron beam very quickly from one point to another on a screen. It had the potential of random access anywhere on a screen to pick up what had been stored there previously. That was the reason it was felt to be so important; there were no other technologies which at that time could match it for potential speed.

So how did the Williams tube actually work? What Freddie Williams discovered was the anticipation effect; this is an effect which relies on secondary emission from the inside phosphor face of the CRT. It was found that for accelerator voltages of 500 to 2000, there was a peak at 1500 volts where the amount of secondary emission exceeded the primary current of the beam. There was an amplification effect, which was eventually used in photomultipliers and other things.

But the whole surface of the phosphor stabilises at about -20 volts with respect to the final anode (the aquadag round the CRT). When a beam strikes the CRT it charges it +3 volts or thereabouts with respect to the mean level, so that sits at -17 volts. The secondary electrons are all part of the stabilising cloud, which stabilises the voltage on the inside of the CRT. Williams found that making a spot move across the screen produced this 3 volt charge, but as it moved to the next point the secondary emission from the next point filled in that charge again, so it neutralised it.

So the net effect is, if it's moving at a certain range of velocities, it will be stabilised, but at the end the beam is switched off. At that point there is a blank at the next part of the sawtooth: the last hole dug is never filled in again, for there is no further spot. Therefore that leaves a charge at the end of the trench, unlike the rest of the trench, which has been filled in by the secondary emission.

If you rescan it with another beam (which is on all the time), you get a waveform in a plate on the front of the CRT. This induced charge is caused by the incident beam striking the +3 volt hill rather than the plateau of the rest; and that induces the charge in the plate on the outside. Because that occurs before the beam is switched off, you can actually use that pulse to switch off the beam that is now travelling, which was an inspection beam.

So now you can regenerate, because now you can actually anticipate that it's going to change there: this time round you switch it off and it regenerates. It's called the anticipation effect because the presence of that signal anticipates that the beam is about to be switched off.

There were various schemes tried; the original one was the dash-dot system, but the difficulty was that that was rather sensitive to flaws and defects in the face of the CRT. It might work in the lab, but once you go into production and talk about 2048 bits on the face of the tube, every one of those has got to work reliably.

Tom Kilburn found that there were problems with the manufacture of CRTs. In particular, what happens is the aquadag coating is squirted on the inside of the tube neck to make the final anode, and very often some of that, in the later processing of the phosphor, would detach and sprinkle over the phosphor. So they had cases where you could move the matrix just slightly and it wouldn't work, then move it again and it would. This was very critical for the larger number of bits that they were trying to store later on in using the tube.

One of the ideas - I'm not sure who produced it - was to go to the focus-defocus method. You wrote a defocussed large area which dug out the charge, and then you inspected it with a fine spot to see whether in fact it had been dug out or not. So you were looking at either a diffuse spot or a spot which had been previously put there. The difference between those two was much greater, and much less sensitive to individual small flaws in the screen. So I believe that was extensively used in the latter days of the memory system.

There was another version of that, which was the dot-circle one. I think the dot-circle method was used more in the States than the UK.

Tom Kilburn and Freddie Williams got a lot of flak from people because there was no real theoretical background for the Williams tube. It just bloody well worked! This always annoyed people, because they couldn't prove it didn't work.

In fact in, I think, 1954 one of Tom's PhD students actually did proper research on how it worked and what was behind it all. The experiments he did then were with the double dot method. What that means is: you fire a dot, and then if you want to store a 1 you displace from that original position and fire another dot, and the debris from the second one fills in the first one. That is a more controllable situation from the point of view of measuring exactly what is going on than the dot-dash system.

The Williams tube was used extensively. It was actually used in the fastest computers in the world in that period for about five years from 1949-54, before ferrite cores arrived. ILLIAC was the fastest computer in its day, and that had 40 Williams tubes; it was a parallel machine 40 bits wide, one tube per bit, 1K words.

The storage system was used in the Manchester prototype and the Mk 1, and also with parallel architecture in TREAC at Malvern. In the States the important systems were the IAS Princeton machine, the IBM 701 and ILLIAC.

The sort of cycle time that Tom Kilburn was using was usually around 10-12 microseconds. His was a serial architecture, in that he read the bits serially across a tube, so you selected a word and then read the bits out of a word. I think the IBM one got down to about 5-6 microseconds, but it was typically twice that.

There were always tremendous arguments between the Manchester people, the Cambridge people and the NPL people as to which was the fastest machine. Although Tom Kilburn could get there quickly, he then had to read out at 12 microseconds a bit. The other people, who were doing on-the-fly ones, had to wait a long time for the information to come round the tube, although the clock speed of Pilot ACE was a lot faster at 1MHz.

Manchester received royalties from a large number of organisations through licensing of the technology, one of which was IBM. Because there were a lot of royalties, that started lawsuits.

There was a claim by Eckert which implied that he had invented the electrostatic storage system. To stop that, the NRDC mounted an interference action in the American courts. I have obtained from the National Archive in Manchester a copy of the briefing paper to the British counsel (representing NRDC) on it. It is fascinating reading, as it's all there, laid out with all the affidavits and all the dates. I'm glad to say that NRDC won the case, and it was deemed to be Williams' invention.

Because of that, IBM licensed the Williams technology and they put it into the 701. I'm not sure how many of the 701s had Williams tubes, because there was a transition period when ferrite cores came in, and later 701s were shipped with ferrite cores; but certainly some of the 701s and 702s went out with Williams tube memories in.

That led to some interesting research by IBM into the engineering of the Williams tube storage system. One of the things that people were worried about - particularly the IBM people - was a thing called the read-around ratio.

Because the charge leaked away slowly within the face of the cathode ray tube and gradually stabilised back to its -20 volts, you had to refresh the data. The refresh time was only a few milliseconds, but the decay time was about half a second.

It also meant that if you didn't revisit a given site within a certain time (the read-around time), because you were reading other bits on the tube, the sort of background hash generated from that would fill in the ones that you hadn't visited. So the fact that you were reading information from a tube and writing to it at various places caused a degradation of places you hadn't visited.

So on the 701 they used an interleaved address read and write method, so as to avoid going to adjacent places on reading and writing. This technique was also used on core stores to try and reduce the amount of crosstalk.
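
The effect of interleaving is easy to sketch. The talk does not describe IBM's actual address mapping, so the fragment below (Python, purely illustrative) uses a simple stride interleave as a stand-in: spots are visited in an order that keeps consecutively accessed positions far apart, rather than marching across neighbours.

    import math

    # Illustrative only: a stride interleave standing in for whatever
    # mapping IBM actually used on the 701.
    def interleaved_order(n_spots=32, stride=13):
        # stride and n_spots are coprime, so each spot is still visited
        # exactly once per pass, but consecutive visits are 13 apart.
        assert math.gcd(stride, n_spots) == 1
        return [(i * stride) % n_spots for i in range(n_spots)]

    print(interleaved_order()[:8])   # [0, 13, 26, 7, 20, 1, 14, 27]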

The papers I've read about the 701 indicate that they were very worried about this degradation of the information on the tube. They were also worried about the uniformity of the electrostatic effect across the face of the tube.

The first CRTs suffered from all sorts of imperfections, and they found it was pollen dust - they were out in the country somewhere, and they were making these tubes in a factory where there was a high pollen environment. It was basically all quality control. It didn't matter on your radar scan, which was what they were mostly used for initially, or on TV tubes; but when you were worried about an individual bit standing on its own, you needed a different level of quality control.

This article is an edited version of the talk given by the author to the Society at the Science Museum on 23 January 1992. Tony Sale is Secretary of the Computer Conservation Society.



Andrew Booth's Computers at Birkbeck College

Andrew Collin


A D Booth is widely recognised as a pioneer of computing: he originated the Booth algorithm, and he was one of the first producers of computers in this country, or indeed the world. This talk does not attempt to add to what is already widely known about his career and scientific work, but is confined to personal reminiscences.

Half a lifetime has passed since 1957. To set the scene, imagine a world where World War 2 was a recent memory; Stalin had died only four years before; the Cold War was at its height; Britain had just been defeated in Egypt; it cost just 2½d (1p) to send a letter; steam trains still ran; there was no colour TV; and anyone with a qualification had no difficulty in getting a job.

In 1957 most people had never heard of computers; many of those who had believed the word referred to a calculating clerk, or perhaps a mechanical gadget to help you shoot down an aeroplane.

In the final year of my engineering degree at Oxford, I came across one of Andrew Booth's early books, `Electronic Digital Calculators'. I was intrigued by his ideas and arranged to visit him at Birkbeck College, where I was taken on as a research student.

At this time Booth was head of the sub-department of Numerical Automation, a section of the Maths department of Birkbeck. The sub-department was housed in an old wartime emergency water tank, which you approached by rickety wooden steps leading down from the street behind the College.

The sub-department had a small population - Booth, his wife Kathleen who developed many early ideas on programming, a secretary, and a few research students working in such diverse fields as linguistics, character recognition, crystallography and the behaviour of thin films. Booth himself had a country house at Fenny Compton, where he did much of his work. He tended to be in College only about three days a week.

The sub-department was equipped with two computers of Booth's own design and manufacture - the APE(X)C and the machine I made most use of, the MAC 1. Both machines shared the same architecture.

The MAC 1 was built into a frame about the shape and size of an upright piano. The base was full of heavy power supplies with large transformers. The `keyboard' was a sloping panel with neon lamps to show the internal state of the machine and a row of buttons for data input. The logic was built on to vertical panels, with the valves out of sight facing backward and the other components to the front so as to be immediately accessible.

The machine weighed about 400 lbs and used about 230 thermionic valves, mostly double triodes.

From the programmer's point of view the MAC 1 had a simplicity which is only beginning to be approached by the latest risc architectures. The cpu had two 32-bit registers, called the accumulator and the register.

The main memory was a rotating magnetic drum with 32 tracks. Each track held 32 words of 32 bits each. Ten bits were enough to address any word in the memory.

Input was through a five hole electromechanical paper tape reader of Booth's own design, and output was through a standard five hole 10 cps paper tape punch, as used in those days for ticker tape.

The machine used 32-bit instructions, and had a two-address order code. The instructions were arranged starting with the two addresses, followed by a function field, a counter field, and a vector bit.

The first address generally gave the data address, and the second indicated where the next instruction was coming from. Each address was five bits of track number and five of word number on the track.

The function field, containing four bits, could specify the following instructions: load to accumulator from memory; add memory to accumulator; subtract memory from accumulator; AND and OR operations; store accumulator to memory; multiply; rotate right using accumulator and register; input to accumulator from paper tape; output from accumulator to paper tape punch; conditional branch on top bit of accumulator; stop.

The machine was serial. Each revolution of the drum took 32 `major' cycles, and inside each major cycle there were 32 minor cycles or bit pulses. A simple data operation such as a 32-bit addition took 32 minor cycles. However, the number of cycles actually used for any operation was controlled by the counter field, which specified a six-bit starting value. The operation would be halted as soon as the counter overflowed.

For most commands the `correct' starting value for the counter was 32, but for shifts any value could be used sensibly.

The vector bit specified a vector operation, which meant that the command was repeated for the whole revolution of the drum, using each memory location in turn. This was chiefly useful for multiplication, where the command actually specified one stage of the Booth algorithm.
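
Taken together, the fields described above account for 31 of the word's 32 bits. The talk gives the field order and widths but not the exact bit positions, so the following sketch (Python, for illustration only) assumes a hypothetical left-to-right layout with one unused bit at the bottom of the word:

    def decode(word):
        # Assumed layout: data address | next address | function |
        # counter | vector bit | one spare bit.
        assert 0 <= word < 2**32
        data_addr = (word >> 22) & 0x3FF   # 5 bits of track, 5 of word
        next_addr = (word >> 12) & 0x3FF   # where the next instruction lives
        function  = (word >> 8) & 0xF      # one of the dozen or so operations
        counter   = (word >> 2) & 0x3F     # six-bit starting value; operation
                                           # halts when the counter overflows
        vector    = (word >> 1) & 0x1      # repeat for a whole drum revolution
        return {"data_track": data_addr >> 5, "data_word": data_addr & 0x1F,
                "next_track": next_addr >> 5, "next_word": next_addr & 0x1F,
                "function": function, "counter": counter, "vector": bool(vector)}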

The complete multiplication was done by loading the multiplier into the register, writing the multiplicand into every location on a given track, and executing a `vector multiply'. The operation would take exactly one drum revolution and leave a 64-bit product in the arithmetic unit.

As far as I am aware, the APE(X)C and MAC 1 machines were the first to incorporate the Booth multiplication algorithm, which gives correct results when multiplying twos complement signed numbers.

Much of Booth's early work was in crystallography, and involved a great deal of calculation with desk calculators. Multiplication was done by adding the multiplicand repeatedly for each digit of the multiplier, and shifting the partial product one place after each sequence of additions.

A well known trick in those days was to speed up multiplication by using subtraction as well as addition. For example, a string of nines in the multiplier could be handled by subtracting once, shifting along several times, and adding.

Booth formalised this observation and applied it to binary multiplication, where it led to a remarkably simple rule: examine each pair of adjacent multiplier bits in turn, starting with an imaginary 0 below the least significant bit; where the pair is 01, add the multiplicand; where it is 10, subtract it; where it is 00 or 11, do nothing; then shift one place and move on to the next pair.
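
The rule is short enough to state directly in code. This is a sketch for clarity (Python), not a model of the MAC 1 hardware, which applied one stage of the rule per memory location under the vector bit:

    def booth_multiply(multiplicand, multiplier, bits=32):
        # Multiply two twos complement `bits`-wide integers by the rule above.
        product = 0
        prev_bit = 0                          # the imaginary 0 below the LSB
        for i in range(bits):
            bit = (multiplier >> i) & 1
            if (bit, prev_bit) == (1, 0):     # pair 10: subtract, shifted
                product -= multiplicand << i
            elif (bit, prev_bit) == (0, 1):   # pair 01: add, shifted
                product += multiplicand << i
            prev_bit = bit                    # pairs 00 and 11: do nothing
        return product

    assert booth_multiply(-7, 9) == -63       # signed operands need no
    assert booth_multiply(6, -5) == -30       # special cases

A string of 1s in the multiplier thus costs one subtraction and one addition however long it is - the binary analogue of the string-of-nines trick.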

The machines did not have a division instruction, although Booth was reserving a function code for this purpose. Division was done by program, and took a substantial fraction of a second.

Since the memory did not have random access characteristics, there were obvious advantages in being able to place successive instructions in the `best' place on the drum. This implied that each instruction needed to carry the address of its successor.

With the two address format there was no need for an unconditional jump. The conditional jump tested the top bit in the accumulator, and used the data address instead of the successor if this bit was set.

Testing for zero was clumsy. You had to test for zero-or-positive, then subtract one and test again. Then you had to add the one back to recover the original value.
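
In modern terms the sequence amounts to the following (a sketch of the logic only, since the real thing was two conditional branches on the accumulator's sign bit):

    def is_zero_via_sign_tests(x):
        if x < 0:            # first test: negative, so certainly not zero
            return False
        x -= 1               # only x == 0 goes negative here
        zero = x < 0         # second test: sign bit set only if x was 0
        x += 1               # add the one back to recover the original value
        return zero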

Using arrays was awkward without an index register. To access successive elements, you had to increment the address in the instruction itself.

Programming tools were primitive. The main aid to programmers was the coding sheet and the pencil. To code effectively you had to know how long each instruction would take, and place each instruction at the best possible place: this was called optimum programming.
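
The placement rule itself is simple, even if applying it by pencil was laborious. As a simplified model (the talk does not spell out how fetch and execution overlapped, so the arithmetic here is illustrative):

    WORDS_PER_TRACK = 32

    def best_successor_word(current_word, exec_major_cycles):
        # The drum delivers one word per major cycle, so the ideal slot for
        # the successor is the word arriving just as this one finishes.
        return (current_word + exec_major_cycles) % WORDS_PER_TRACK

    # An instruction at word 5 taking two major cycles is best followed by
    # one at word 7; any other slot wastes part of a revolution waiting.
    assert best_successor_word(5, 2) == 7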

When the program was written it was coded (that is, each group of five bits was translated into the corresponding Baudot character) and punched on a device originally intended for telegrams. Then the tape was read into the computer using a loader which lived on track 0. More often, however, the program was keyed into the computer by hand, because the tape reader was not working reliably.

Another aid to programming was the library: we had a number input routine which lived on track 24, decimal output on track 26, and so on. The machine was innocent of any symbolic assemblers, but you could single-shot your program to find out where it was going wrong.

In terms of hardware, the machine must have been one of the most primitive ever constructed. However, the technology available for computing was equally antediluvian.

The transistor had only recently been invented, and was not reliable at high frequencies. No transistors were used in MAC 1.

Logic gates were made from germanium diodes. These components are notoriously heat-sensitive, and on warm days we had to aim an electric fan at the innards of the computer to make it work at all.

The registers in the cpu were Eccles-Jordan flip-flops which used one double-triode valve per stage. A cascade of these stages was able to shift a binary pattern without any intermediate storage. Each flip-flop was linked to the next one by a couple of small capacitors, and when all the stages were forcibly switched to zero by a huge, very short `shift' pulse, each stage was then able to take on the state of its predecessor.

I never understood how this worked, although the process has variously been described as `magic' and `stray capacitance'. In any event the system was extremely sensitive to the size and duration of the shift pulse, which had to be adjusted, it seemed, even to suit the weather.

Nowadays all shift registers are designed with two phases, so that intermediate storage of the data is assured, but in those days valves cost the equivalent of £20 each and were extremely unreliable, so the best design was the one which used the smallest number of valves!

Over prolonged use, valves degrade in variable and unpredictable ways. Quite often a valve which failed in one circuit would work in another. This led to a popular method of fixing faults: take out all the valves and plug them back in at random!

An interesting feature of the machine was its drum store. It consisted of aluminium, coated with a layer of magnetic oxide. Flying heads had not been invented, and each of the 32 heads was attached to a solid brass pillar. Its exact spacing from the surface of the drum had to be adjusted by turning a screw. Since the correct spacing was only a few microns, this was a delicate operation.

We would set up the computer to write and read alternately, and examine the output of the head on a 'scope. At first there would be nothing; then, as the head approached the surface, information would begin to come back. If the screw was turned a tiny fraction too far, the head would crash with a loud clang and strip the oxide from the surface of the drum. That track would have to be abandoned until the drum was next recoated.

The drum also provided timing signals for the whole machine. Two special tracks had notches machined into them, and the signals from the heads generated the major and minor cycles respectively.

Track selection on the drum was done by a relay tree. They were the fastest relays you could buy at the time, and on a good day they could switch tracks in only six major cycles. The delay had to be taken account of in programming: if you tried to read a word from a different track with a `gap' of less than six words, you would get all ones.

Sometimes a relay stuck, and some tracks were inaccessible. If necessary we would reprogram the machine to avoid those tracks.

We were always interested in getting the machine to run faster. The overall speed was governed by the speed of the drum, which was driven by a synchronous AC motor at 50 revolutions per second. On one occasion I bought a second-hand motor-alternator set and installed it with a smaller pulley on the alternator, so it generated current at 75Hz. This speeded up the computer by 50%!

It is interesting to compare MAC 1 with present day technology. This talk was written on a 386SX PC. The table shows how primitive MAC 1 was in comparison.

                                   MAC 1        PC           ratio
RAM                                8 bytes      2 Mbyte      250,000:1
Rotating memory                    4096 bytes   40 Mbytes    10,000:1
Speed                              50 ips       12 mips      250,000:1
Mean time between faults           30 minutes   5000 hours   10,000:1
Mean instructions between faults   100,000      10^14        10^9:1

Looking back, what did we achieve? In terms of practical computation, very little. All attempts at serious work were frustrated by the frequent failures of the machine, and the lack of credence one could attach to its results.

As a research tool, however, Booth's machines were highly successful. They were used to develop and demonstrate numerous programming techniques, and to educate a whole generation of research students.

One feature of the work at Birkbeck puzzled me for years. At the time we were doing our best to use MAC 1, effective and reliable computers such as the Ferranti Pegasus were already available. Why did we bother?

On reflection, it now seems clear that Booth's machines were early ancestors of the mini or PC. Pegasus was the supercomputer of its day. One machine served a large organisation, was tended by a full time professional staff of operators and programmers, and cost about £50,000 to install (say £1 million in today's money). But the total component cost of the MAC 1 was only a few hundred pounds, and the machine could be assembled by a skilled technician in about six weeks.

Booth's aim was to build a machine which could be afforded by small companies, colleges and even schools. If the machine had been more reliable, it would have allowed a working knowledge of computers to be spread earlier and more widely than in fact occurred.

This article is an edited version of the talk given by the author to the Society at the Science Museum on 28 November 1991.



The Origins of Packet Switching

Derek Barber


I remember Donald Davies at NPL coming into my office one morning to discuss the future research programme. I think that would have been probably late 1965. His feeling, he said, was that data ought to be handled by a network rather in the way that parcels or packets are handled in the postal system.

At least that's the way I remember it. Donald tells me his recollection was that the packet idea came a bit later, but I think he's wrong. I'm sure he did mention packet at that first meeting. At the time he initiated a small survey as to how the word packet would translate into most of the world languages, in order to judge its suitability. The result was generally favourable, except for Russia where it was already in use as a data block in a link. But he decided that that wasn't much of a constraint, so packet it was.

But let us start at the beginning. I started work in the Post Office Engineering Department. I did the rounds in the workshop with different lads and so on - jolly good groundwork. I actually worked on the London to Birmingham one inch cable for television, and I remember the night when the Birmingham transmitter opened in November 1949. The line went down five minutes before the show was due to go live. At least we thought so - eventually it turned out the film had broken in the telecine, so they had to rewind it before they could start. The network was perfectly all right.

I eventually got a degree (part 1) in evening classes, then got through the open experimental competition. I got put into research branch and worked in the war room under the ground, testing contacts for nearly a year. Then I got put into RC4-1, which was a marvellous division, working on pulse and bar testing of television links. Then I got my full degree, applied for the position of open scientific officer, and got sent down to NPL.

The Director was Bullard, and at that time when I went for the interview, about March 1954, there were two sections: Control Mechanisms in Metrology, and Electronics in Maths Division. These were run by Dick Tizard and Frank Colebrook.

At the interview I sat down in the large conference room at the big oval table, and they asked where I had been working. I said Dollis Hill, and that I had been working on vestigial sideband transmission. Bullard said ``Just a minute, that's a technical term. You must explain it.'' Colebrook leant forward and said ``Perhaps I can handle this one, Director.'' And he chatted for about 10 minutes about modulation methods and the like. Then Bullard said ``Christ, I'm due up in town in about 10 minutes. I'm late already. Nobody else has got any questions, have they?''. And that was it!

NPL formed a division with Dick Tizard as Superintendent, and I got working on guided weapons data processing. Tizard then went to LSE, and Ted Newman became acting Superintendent. I remember Ted ringing me up one day in 1958 and saying ``I'd like you to represent us on the IEE Measurement and Control Section.'' I said ``Oh, I don't think I can do that.'' ``Oh yes,'' he said, ``I'm confident you can.'' So in the end I did it. Yet it was some years afterwards that I realised he was just trying to find a mug to take on the job.

We made a digital plotting table. I built a very high speed analogue amplifier-multiplier. Then we started work on an alcohol still as part of our work on adaptive control under Percy Hammond. I got my PSO promotion then. That was in 1963.

I meant to say something about that plotting table. I got some transistors in from BTH to build the plotting table. It was an interesting design, because it had a binary point and about 10 places of binary fractions. I don't think we ever had a situation where all of the transistors were working at one time. That created a problem of accuracy, as you can imagine.

I went to the States in 1963 and spent four weeks there, which opened my eyes. I went to MIT and saw Project MAC; I saw the PDP-1, and the Sketchpad work that Ivan Sutherland was doing.

As a result of the distillation column instrumentation, we got working on a data processing system with standard interfaces and data transmission, and that paved the way for the network.

In 1965 Dunworth was Acting Director, Uttley was still running Autonomics Division, and the SPSOs were Percy Hammond, Ted Newman and Donald Davies. I was under Percy, doing adaptive control and the NPL Standard Interface.

I became chairman of BSI DPE 10, which is how I found out about politics in standards. Eventually, at a Berlin meeting in 1969, the BS 4421 British Standard Interface was being offered as an international standard. There was a dead heat, and the chairman gave his casting vote against it.

By October 1970 I gave up the BSI work, because other things were beginning to build up, like the Data Communication Network. But this `packets' meeting with Donald Davies in mid to late 1965 came about partly because of this work.

Donald himself went to IFIP 65 in the States, and he came back from that with the view that you ought to handle data in the same way that you handle data in a time-share machine, with time slicing, resource allocation and so on. I think that was basically where the thoughts came from. On the 18th March Donald gave a lecture at NPL, but there are one or two papers before that which have never been published.

So Donald was going public then on packet ideas, and we formed Autonomics project No 6 on Data Communication early in 1967. So that's the background.

Soon after that, Uttley left and Donald became Superintendent (I think). Percy was one of the SPSOs and Ted was the other one. I got the SPSO in 1969 and picked up the other half of the division. For a while I thought we had a marvellous team; the three of us used to meet on Monday mornings in Donald's office and really kick ideas around. I look back upon that time as a time when things were just right.

Out of that came the data communications work. Roger Scantlebury took over that. Keith Bartlett was with the hardware team, and for software we had Peter Wilkinson, John Laws and Carol Walsh. Pat Woodroffe stayed with Alan Davies for a while on the BS 4421 and was instrumental in the big display. I think Brian Wichman wrote the cross-compiler for the KDF9. Anyway, we had a Mk 1 and a Mk 2 system. The Mk 2 was an altogether better design.

We had a Modular 1 to run Scrapbook. Maths Division had the KDF9. There was a PDP-11 front end on which ran the Edit service, which was eventually made available on the network.

At Donald's talk in March there were over 100 people, including 18 from the Post Office. I mention them because they featured quite a lot in what happened afterwards, as far as Roger and I were concerned. There had been a paper written by someone from the Rand Corporation which, in a sense, foreshadowed packet switching for speech networks and voice networks, but nobody knew anything about it and certainly it didn't enter into our thinking at all. Eventually Donald wrote an internal paper which was really his lecture polished up.

Then the Real Time Club was formed early in 1967. Rex Merrick and Stan Gill were very important then, and Donald of course. In the meantime Roger went to the ACM Symposium on operating systems at Gatlinburg. There wasn't a conference about networking - of course, the subject hardly existed - so the operating systems symposium seemed, since it was about timesharing, the right place to go.

Anyway, Roger went and gave a paper called ``Digital Communication Networks for Computers giving a Rapid Response at Remote Terminals''. Larry Roberts had extended the concept of a support graphics processor to the idea of a network, and he was then talking about multiple computer networks and inter-computer communication. Roger actually convinced Larry that what he was talking about was all wrong, and that the way that NPL were proposing to do it was right. I've got some notes that say that at first Larry was sceptical, but several of the others there sided with Roger and eventually Larry was overwhelmed by the numbers. That actually gave birth to Arpanet, because Larry joined soon after and became responsible for it.

Events happen and it's difficult to get them chronologically right. But certainly by July 1968 the Real Time Club had organised this great big event in the Royal Festival Hall. We worked jolly hard to produce various bits of kit and so on, and we were able to put on this show (either simulated or using real networks). This provoked a debate between Stan Gill and Bill Merriman, because the Post Office at the top level were all telephone people, and it must have been very hard to take on board something very new.

In autumn 1969 there was the Mintech Network proposal. Roger and Donald came round to my house and spent an evening discussing this and putting ideas together.

In November 1969 I went to the USA, and I saw the first three Arpanet nodes that were ready to be shipped out to the West Coast. I remember going round with a set of slides and giving a talk. When I came back, the Director was a bit worried in case I'd been giving away all our ideas. Donald's response was that we had to tell people about it to get anything to happen.

Two and a half hours of KDF9 run time accounted for half a second of run time on the network - an amazing ratio really. It just shows how slow the KDF9 was.

Then came the isarithmic network: Donald was doing work on controlling congestion there. Then Costas Solomonides came to join us, and did a lot of work on hierarchical networks in collaboration with Logica.

Then there was the Mark 2 NPL network software. The first lot had been written in assembler or something. Peter Wilkinson worked for five months and nothing really appeared except strange transition diagrams. Eventually the software got written and they loaded it all up, and there were two bugs, which they cleared in a day; and from then on it just ran.

Eventually it was all rewritten in PL516. In doing that we had Ian Dewis, who came and joined us and later went to British Steel, where he got involved in their network. Then Alan Gardner was seconded from the Post Office to us, so we began to get people coming from outside. The reputation had got around of this interesting work going on.

There were also the attached services. Scrapbook I've mentioned, and Edit, involving Tony Hillman and Roger Schofield. Then we had a File Store built by people from CAP. Then we had a Gateway to EPSS (the timing is a bit uncertain). Moving on towards EIN now, John Laws was responsible for the EIN management centre; he's at RSRE now.

Just a few words about international things that were going on. I've already mentioned Arpanet and Larry Roberts. Telenet was basically a company to exploit Arpanet, and Larry Roberts was its president. Eventually that was taken over by GTE. That was a recognised carrier, so Telenet became respectable, and therefore a recognised public operator.

I mention Peter Kirstein at UCL with the gateway to Arpanet because, over a period of time, that's been quite significant. Certainly we used to use the Arpanet message service - electronic mail - through that gateway quite often. I did even when I was in the Alvey Directorate.

In Canada, Bill Morgan built Datapac about this time. By now I was running EIN, and I got involved a lot because of that. I also got involved in CCITT, SET and all manner of things.

I actually set up the first meeting between John Wedlake of the British Post Office and Rene Dupre of the French PTT, which led to X25. There was a problem about virtual calls in EIN, so I called this meeting, and that actually did in the end lead to X25.

A philosophic point about networks: if you make it properly dynamic resource sharing, and it all runs much faster than any user, there's a high probability the user gets what he wants. But the PTTs are not happy about that, partly because of the background (the fact that the telephone network provides a connection and so on), and partly because they've got an obligation to provide a guaranteed service of some kind. If they give you a circuit and you succeed in dialling it up, then you expect to get 4 kHz; you don't expect 2½. So their philosophy was very much that you had to simulate an end to end connection.

Rene Dupre was a man who believed in allocation of resources; so in the French RCP network he had buffers allocated at every switch, per call. Arpanet had buffers allocated at the ends, but in the French network they didn't even allocate at the ends; they basically did it in what amounts to host computers.

But Dupre had got a guilt fixation about this, and there were these meetings because the PTTs had to get agreement on X25. There's an interesting difference between PTTs and computer people when it comes to standards: the PTTs sell services, and you can't sell services if you don't have standards - which is why the bottom three layers of the ISO model were settled early, while the top layers are hardly released yet.

But Dupre went around telling everybody that if you don't build it our way, we won't get an agreement. And if we don't get an agreement, none of us will be in business, because we won't be able to sell data networks: that's roughly how it went.

The Cyclades network led to Transpac, so the French PTT in the end got off the ground with a French network. Cezar, who did that, was involved in EIN with Logica.

The Spanish, dark horses, were the first people to have a public network. They'd got a bank network, which they craftily turned into a public network overnight, and beat everybody to the post.

This article is an edited version of the talk given by the author to the Society at the Science Museum on 30 April 1992.



Altair and After: the Original PC Revolution

Robin Shirley


The almost accidental commissioning of the Altair 8800 microcomputer kit in 1975 to accompany a magazine series proved to be the catalyst that launched a snowballing personal computer movement based on machines that adopted its 100-way bus as a de facto standard.

This account chronicles the movement's explosive growth and examines the factors that fuelled it, the people and companies that were involved, and the populist, libertarian political and social ethos that it sought to promote.

Origins

What provided the impetus for the personal computer movement? Not the established computer industry, at least not directly.

There was a growing substratum of young, smart programmers and users, mainly in universities and colleges but also from industry and business, who had for years nursed a love-hate relationship with the crude, inflexible mainframe computers and hostile, autistic systems software on which their jobs ran, or, just as often, failed to run.

Of course there were also smaller and neater computers, but only the fortunate few got their hands on them and could feel they were masters of their own fate. These would usually be small, dedicated minis like PDP-8s, PDP-11s or maybe Novas, in a science or engineering lab.

So the primary issue was freedom from interference and frustration. The other main one was power.

The mainframe computer was an obvious symbol and concentrator of corporate and official power. It tended to reinforce all the tendencies to centralism that don't need any encouragement in any sizeable organisation. On the other hand, it could clearly also be a tremendously powerful tool if exploited effectively.

In those days a lot of mileage was got out of `Grosch's Law', a rule you don't hear much of now, which proposed that the power of a computer system increased as the square of its cost - so a machine costing 10 times as much was supposed to deliver 100 times the computing power. Computers thus seemed to mean more and more power for the big battalions, who arguably already had as much power as was good for them.

Reactions to this situation varied. The general public tended to take a broadly luddite position - computers are essentially anti-people and should be curbed - a view that still has its adherents today.

Others felt in their bones that there had to be a better way...

The story of the rise of the microprocessor and large-scale integrated circuitry is a familiar one, so I won't dwell on it. In essence, in 1971 a small team of mostly ex-Fairchild people led by Ted Hoff produced the first commercial microprocessor, a 4-bit unit - the Intel 4004 - commissioned to provide the driving logic for a Busicom desk calculator, and pursued for wider uses when Intel bought back the rights to the design.

The 4004 was doubled up to give the 8008, a primitive 8-bit processor originally designed for a Datapoint intelligent terminal (an order that fell through); it lacked, for example, any direct memory addressing instructions, and was improved and refined in 1973 to give the Intel 8080.

The 8080A, an NMOS chip clocked at 2MHz, was the processor used in the Altair and its first generation successors, and gave them their characteristic architecture. It had an 8-bit data bus, a 16-bit address bus (and hence a 64 Kbyte direct-addressing range), a separate 8-bit I/O address space that defined 256 `ports', a mixture of 8- and 16-bit registers, and a reasonably adequate order code of 78 instructions. Intel also provided a family of support chips that made it relatively easy to produce a complete 8080-based system.

Z80

It was followed up a couple of years later by an enhanced and extended version, the 8085, but by then most of the S-100 microcomputer interest had shifted to the Zilog Z80, produced in 1976 by a small group of engineers who had, after the fashion of Silicon Valley, split off from Intel to go their own way.

The Z80 ran at 2.5MHz and had an extended order code of 158 instructions (400 or so if you chose to count the different bit-manipulation orders separately), a single +5v power rail and a single-phase clock, so it was an altogether more elegant device than the 8080.

Best of all, its order code was an almost exact superset of the 8080's, so that a Z80 could execute nearly all 8080 programs unchanged. Its faster 4MHz Z80A version offered real power, and featured in most of the second generation S-100 systems. Eventually still faster versions appeared, the 6MHz Z80B and 8MHz Z80H, but by then other architectures had moved to the fore.

Meanwhile, another line of development and spin-offs led to the Motorola 6800, used in the South West Technical Products machines and in the Altair 680, and the MOS Technology 6502 (1MHz and 2MHz) that powered other important (non-S100) machines like the Apple II, Commodore Pet and BBC micro.

Altair

In late 1974, a series of computer construction articles on an Intel 8008-based Mark 8 microcomputer appeared in Radio-Electronics magazine. This was the first time a computer had been put within the reach of anyone but a large company, and it aroused enormous interest.

Not wanting to be outdone, Les Solomon, the editor of Popular Electronics, commissioned Ed Roberts, the president of a small company called MITS in Albuquerque, New Mexico, to come up with a similar computer kit. Roberts decided to base it on Intel's new 8080A chip, and so the Altair 8800 was born.

The first Altair article appeared in the January 1975 issue of Popular Electronics. It had a bus based on a 100-way edge connector on which MITS had got a good surplus deal, and was called the Altair bus.

The original Altair was essentially a prototype and had many shortcomings, from a feeble power supply to somewhat flaky bus timing, and was replaced in due course by a revised production version - the Altair 8800b - which was somewhat better.

Meanwhile, improved clones started to appear, so that by August 1976 Dr Dobb's Journal of Computer Calisthenics and Orthodontia (DDJ) was calling it the Altair/IMSAI or `Hobbyist Standard' bus.

Roger Mellen of the then small company Cromemco proposed the name `Standard 100' bus, or S-100 for short, because it had 100 lines, and this was the name that stuck. In due course, some five years later, a cleaned up 8/16-bit version became officially standardised as the IEEE 696 bus.

The S-100 bus had most of the faults and virtues of unplanned industry standards. It had been designed in a hurry, was not optimised against crosstalk, and leant rather too much on the peculiarities of a particular processor (the 8080). However, it could be made to work reliably and was good enough. It quickly became the de facto standard.

Floppy disc systems were appearing too - at first rather bulky ones based on single-sided single-density (SSSD) 8-inch drives, holding a nominal 250Kb (kilobytes) per diskette. Soon, however, these were supplemented and eventually supplanted by a new 5.25-inch Shugart SA400 mini-floppy format. In SSSD form these stored about 175Kb. For several years the 5.25-inch drives continued to be scorned by 8-inch disc users, in the kind of shallow partisanship that often afflicts technical enthusiasts.

Among the vendors of add-on disc systems was a Californian company called North Star, whose blue-painted drive cabinets became a common sight. Their popularity was based on reliability, low price and (especially) the fact that a somewhat spartan but efficient operating system (North Star DOS), accompanied by an excellent Basic (using BCD arithmetic and hence good at avoiding roundoff in financial calculations), was bundled free with their disc controllers. Floppy-disc North Star DOS was stripped for speed, with few concessions to convenience, and easily out-performed CP/M - and indeed many hard disc systems too.

They also made a hardware floating point board designed around the 74LS181 4-bit ALU, which was also supported by versions of North Star Basic. Just as would occur a decade later with add-on PC board makers like AST and Everex, North Star was soon to use this experience as a springboard into producing complete systems.

Second generation

The period 1976-77 saw the arrival of a host of high quality second-generation designs. This was the golden age of S-100 systems, in which appeared classics like the Cromemco Z2, North Star Horizon, Vector Graphics MZ and Ithaca Audio (later Ithaca InterSystems) DPS-1.

The design of the classic small business or scientific microcomputer system crystallised as a 4MHz Z80A-based S-100 machine in a 19-inch cabinet with twin 5.25-inch floppy drives, running under CP/M.

Horizons in particular were very widely used in the UK, though outsold by Vector Graphic and (more marginally) Cromemco in the USA. The Horizon seems to be remembered with affection by all who used it as an elegant, rugged, stable and long-lived design. It has become part of industry folklore that the engineer's console on the original Cray 1 supercomputer was in fact a rack-mounted Horizon.

The Horizon motherboard design, with its input/output circuitry mounted on a rearward extension of the PCB, was notable for its far-sightedness, providing for every variant of asynchronous and synchronous I/O or interrupt servicing that might be needed. This was especially useful for OEM applications, where (like other S-100 machines) it could easily be built into a standard 19-inch equipment rack. This sort of use was quite common and accounts for many of the machines still active today.

Looking back over 10 years or so of servicing Horizons, I'mstill impressed with how few design faults they had - I canreally only think of two, both relatively minor and arisingfrom an apparent blind spot on the part of its designer, whotended to disregard the long-term consequences of whathappened to waste heat once he'd dispatched it to a heat sink.

CP/M

Just as significant to the success of S-100 microcomputers as their hardware standardisation was the standard software environment offered by the CP/M operating system.

In 1973, Gary Kildall, a young software consultant at Intel, was fed up with trying to develop the PL/M programming language for microprocessor development systems on paper tape using an ASR 33 teletype, and so begged an ex-10,000-hour-test floppy drive with worn-out bearings from the marketing manager at Shugart Associates, a few miles up the road.

However his attempts at interfacing proved abortive, and itwas not until late 1974 that a colleague, John Torode, took aninterest in his problem and completed a wire-wrap controllerto interface the drive to Gary's Intellec-8 developmentsystem.

Meanwhile Gary had put together a primitive disc operatingsystem for the drive, and in due course (according to Gary)the paper tape was loaded and, to their amazement, the drivewent through its initialisation and printed out the systemprompt on the first try (legend doesn't record whether it alsodid so on the second try).

Gary named the operating system CP/M, which in early accounts stood for Console Processor and Monitor, but later became dignified as Control Program for Microcomputers. It was ported to two other (non S-100) microcomputer systems during 1975, and Gary continued to work on it in his spare time, producing an editor, assembler and debugger - ED, ASM and DDT (the style and nomenclature of CP/M were heavily influenced by DEC operating systems).

In 1976, IMSAI shipped a number of floppy disc systems withthe promise that an operating system would follow, but as yetnone existed! Glenn Ewing, who was then consulting for IMSAI,approached Gary Kildall to see if he would adapt CP/M to fillthe bill. Gary agreed, but so as not to have to change CP/Magain as a whole to fit another computer system, he separatedout the hardware-dependent parts into a sub-module called theBIOS (Basic Input/Output System), so that any competentprogrammer could then do the job.
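The architectural point deserves emphasis, because it is what made CP/M portable: everything hardware-specific sits behind a small fixed set of entry points, and the hardware-independent core only ever calls through them. A schematic sketch in Python - the entry point names echo the real BIOS jump table (CONIN, CONOUT, READ, WRITE), but the classes themselves are purely illustrative:

    class BIOS:
        """Hardware-dependent layer: rewrite this, and only this, per machine."""
        def conin(self) -> int: ...             # wait for a console character
        def conout(self, ch: int) -> None: ...  # send a character to the console
        def read_sector(self, track: int, sector: int) -> bytes: ...
        def write_sector(self, track: int, sector: int, data: bytes) -> None: ...

    class BDOS:
        """Hardware-independent core: identical on every machine."""
        def __init__(self, bios: BIOS):
            self.bios = bios

        def print_string(self, s: str) -> None:
            for ch in s:
                self.bios.conout(ord(ch))   # the core talks only to the BIOS

Porting then reduced to rewriting the small BIOS layer, which is exactly why `any competent programmer could then do the job'.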

The resulting version, CP/M 1.3, was distributed by IMSAI with modifications as IMDOS, and Gary was also persuaded by Jim Warren, the editor of DDJ, to put it on the open market at $70 a copy. Gary did this rather against his better judgement, since contrary to Jim Warren's assurances he was pretty sure that unlicensed copies would immediately circulate.

However, to the amazement of sceptics, it was treated as apoint of honour among the delighted S-100 users never to passon their copies of CP/M, and the rip-off factor turned out tobe practically nil. Gary Kildall formed a new company calledDigital Research to support CP/M, and quickly became amillionaire.

CP/M was updated to versions 1.4 and 2.0, and then stabilisedas CP/M 2.2 for several years. Late in the day, a version 3.0for bank-switching systems with more than 64Kb of memory wasproduced, but by then 8-bit systems were on their way out,and, apart from a subterranean existence in the Amstrad PCW,it saw little action.

The standard software environment provided by CP/M proved tobe the final catalyst that was needed. From then on, the mainefforts of numerous independent microcomputer software writerscould be directed into providing packages to run under CP/M,with the confidence that this would open up the entire marketof S-100 systems for their products. The era of what was tobecome known as `shrink-wrapped software' had arrived.

And what software! An explosion of very high quality programsfollowed, often written by some of the top big-machineprogrammers in their spare time, and sold at prices that wereminute compared with their counterparts on minis andmainframes.

Whole new categories of software sprang up in this fertileenvironment. Interactive screen editors like Electric Penciland Wordmaster led to MicroPro's WordStar, the first fullmicrocomputer word processing program, which outperformed mostdedicated word processors at a fraction of the cost, and letmicrocomputers start recovering their investment by doinguseful work from day one, something that seldom if everhappened with big machines.

WordStar sold a lot of microcomputers; VisiCalc and SuperCalc, the first spreadsheet programs, sold even more. Microcomputer programmers were now no longer just recapitulating minicomputer and mainframe software development, but breaking completely new ground.

The crude but promising database program Vulcan wastransformed by Ashton-Tate into dBase II (so called to inspireconfidence in its maturity - there never had been a dBase I).

IBM

In early 1980, elements within IBM started to sit up and takenotice of the booming personal computer industry. The way ithappened was that a year or two earlier, IBM had established acluster of so-called Independent Business Units (IBUs in IBM-speak) with permission to act semi-independently, largelyunfettered by the IBM bureaucracy, and a brief to break intonew markets. Among these was the Entry Systems (PersonalComputer) Unit, tucked safely away from the company'smainstream in Boca Raton, Florida.

In July 1980, Philip D Estridge, a divisional vice-president,was put in charge of a team of 12 and given a year to create acompetitive personal computer. About 13 months later, inAugust/September 1981, shipments of the IBM PC started.

At this point it is helpful to remember that a number ofobvious contenders among the large computer and electronicscompanies (Texas Instruments, DEC, Intel and Motorola, forexample) had already made their move and failed dismally,generally through what they thought of at the time as doingthings professionally, but what in hindsight looks more liketypical big-company habits of inertia and over-pricing. DEC,for example, knocked the prospects of its Rainbow PC on thehead in a classic bit of marketing over-reach by deliberatelynot providing a format program, so that users would have tobuy all their floppy discs from DEC.

In consequence, it had become an article of faith in the PCmovement that the big corporations stood as little chance ofgetting there as the dinosaurs had had of supplanting mammals.The bigger the less likely, it was assumed, so the longestodds of all would be against IBM.

So what went right at IBM? The big difference at Boca Ratonwas that the design team had a reasonably free hand and,crucially, that it included a number of computer hobbyists andhackers (in the original and proper sense) - people whoalready owned and were familiar with the existing personalcomputers - and that the project was allowed to adopt theiropen systems philosophy and reflect their user experience.

Except in one important respect (the 8088 with its 20-bitaddress space, which according to Peter Norton was urged onthem by Bill Gates at Microsoft), the original specificationof the IBM PC was intentionally a combination of the better(or at least well-tried) practices from the various 8-bitmachines. It was perceived as such at the time, and this wasseen by both users and reviewers as a virtue, since they wereinclined to blow a raspberry at large computer companies whodisdained to follow the custom and practice of existing users.
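That 20-bit address space was reached through segmentation: a 16-bit segment register, shifted left four bits, is added to a 16-bit offset to give a 1Mb (2^20 byte) physical address. The arithmetic is simple enough to show directly (a sketch in Python):

    def physical_address(segment: int, offset: int) -> int:
        """8086/8088 real-mode address: (segment << 4) + offset, 20 bits."""
        return ((segment << 4) + offset) & 0xFFFFF

    # Many segment:offset pairs alias the same physical byte:
    assert physical_address(0x1234, 0x0010) == physical_address(0x1235, 0x0000)
    print(hex(physical_address(0xFFFF, 0x000F)))   # 0xfffff - top of the 1Mb space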

The base model had only 16Kb of RAM, plus 40Kb of ROM including an embedded variant of MBASIC, five expansion board slots and a cassette interface (!) for external storage. The RAM was expandable only to 64Kb on the motherboard, but to 256Kb using memory expansion boards - though not quite so far in practice, since 64Kb boards were the largest then available, and these had to compete with other boards for the five slots, one of which would be pre-empted by a display adapter, another by a floppy disc controller, and quite probably a third by a serial port.

The 5.25-inch floppy drives were at first SSSD and provided only 160Kb per disc, but IBM subsequently switched to double-sided double-density (DSDD) units holding 360Kb, mostly Tandon TM100-2s, as used, for example, in North Star Horizons. The standards of internal construction matched (but mostly didn't exceed) the best practice of S-100 manufacturers like North Star, Cromemco and Godbout CompuPro. This was enough to put it well ahead of most minicomputer standards.
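Those capacities follow directly from the disc geometry - 512-byte sectors, 40 tracks per side, eight sectors per track at first and nine later. A quick check of the arithmetic:

    def capacity_kb(sides: int, tracks: int, sectors: int,
                    sector_bytes: int = 512) -> float:
        # Kilobytes here are 1024 bytes, as in the text.
        return sides * tracks * sectors * sector_bytes / 1024

    print(capacity_kb(1, 40, 8))   # 160.0 Kb - the original SSSD format
    print(capacity_kb(2, 40, 9))   # 360.0 Kb - the later DSDD format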

The operating system, PC-DOS, was a CP/M clone for 8086-family processors, bought in from Seattle Computer Products and hastily converted by Microsoft. Version 1.0 more or less worked, but was stark in the extreme. Moreover, its command formats had been changed in the direction of Unix, just enough to make them tiresomely different from the quaint but familiar nomenclature of CP/M (which had in turn been inherited from DEC operating systems).

At this stage, Boca Raton was still undecided whether the PC would primarily be a home computer or a business machine, so they hedged their bets. The two video cards offered - the CGA (colour/graphics monitor adapter) for games and the MDA (monochrome display adapter) for professional use - reflected this ambivalence. Also, as we've seen, a cassette interface was built in as standard, and the 320x200 colour graphics and 40-column text modes of the CGA adapter were chosen to match the meagre bandwidth of NTSC domestic TV sets.

As one might expect, the promotion and marketing of the IBM PCwas impressive and the documentation superb. The TechnicalReference Manual, which opened up the architecture to add-onboard manufacturers, won special praise. Another essentialbreak with IBM tradition, based on a study of Apple'ssuccessful methods, was the use of independent dealers andfranchised networks like Computerland, as well as directcorporate sales.

The price structure was also carefully pitched to undercut slightly its similarly configured rivals, at least in the US. Over here the time-honoured pound-for-dollar equation was not resisted, so that, like the Apple II before it, the PC didn't get a look in as a home computer against the smaller but similarly configured 48Kb Spectrum at nearly a tenth the price.

In the event, the most important aspect turned out to be theway software vendors seized with both hands the opportunity todistribute on a standard disc format and, especially, to writefor a standard (more accurately, two standards) of screenaddressing - and the rest is history.
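The `two standards' of screen addressing were memory-mapped: programs wrote character and attribute bytes straight into the video buffer, which sat at physical address B8000 hex on the CGA and B0000 hex on the MDA, two bytes per character cell. A sketch of what such a direct screen write amounted to, with Python standing in for the assembler or compiled Basic that actually did it:

    CGA_TEXT_BASE = 0xB8000    # the MDA text buffer sat at 0xB0000 instead
    COLUMNS = 80

    memory = bytearray(0x100000)   # a pretend 1Mb address space

    def put_char(row: int, col: int, ch: str, attr: int = 0x07) -> None:
        # Two bytes per cell: character code, then attribute (0x07 = white on black).
        addr = CGA_TEXT_BASE + 2 * (row * COLUMNS + col)
        memory[addr] = ord(ch)
        memory[addr + 1] = attr

    put_char(0, 0, "A")    # 'A' appears at the top left of a CGA text screen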

In little more than a year, sales of the IBM PC far exceededthose of all the S-100 manufacturers. The tone of themicrocomputer world was increasingly set by the new cohorts ofPC business users, and the personal computer revolution, asoriginally envisaged, could be said to have died of success.

From 1986 on, rocketing price-performance on the PC/AT-clonefront quietly buried further development of S-100 systems.

This article is an edited and abridged version of the talkgiven by the author to the Society at the Science Museum on 27February 1992. Robin Shirley is Chairman of the S-100 BusWorking Party.


TopPreviousNext

Letters to the Editor


Dear Mr Enticknap,

May I please draw your attention to a small but quite significant error in Volume 1 Number 4 of Resurrection. It appears on page 21 (line 3 onwards) in the report of Tom Kilburn's talk to the Society about the Mark I computer. His memory is clearly at fault regarding the position and contribution of Sir Ben Lockspeiser.

Sir Ben was never associated with Ferranti Limited except as a customer. In 1948 he was Chief Scientist at the Ministry of Supply and very concerned that some of their development programmes were being held up by a lack of adequate computing facilities in the UK. With this in mind he visited Manchester University accompanied by Eric Grundy, the Director of Ferranti Limited responsible for the Instrument Department which, as Kilburn said, was helping the University's computer project. Sir Ben wanted to assess the possibility that the computer project could be of practical assistance to him. He was favourably impressed and, being a decisive and energetic man, promptly wrote to Eric Grundy a letter intended to get things moving.

An ancient photocopy of a contemporary typed copy of thatletter is enclosed herewith. Unfortunately a search carriedout some years ago failed to unearth the original in theFerranti files. However, that letter, dated 26th October 1948,effectively brought Ferranti Limited into the computerbusiness and played a seminal part in the development ofcomputers and computing in the UK.

The final sentence in Sir Ben's letter became an oft quotedphrase in the company.

Yours sincerely,

MH Johnson
Oxford
11 September 1992

Editor's note: the text of the copy letter referred to readsas follows:

Dear Mr Grundy,

I saw Mr Barton yesterday morning and told him of thearrangements I made with you at Manchester University. I haveinstructed him to get in touch with your firm and draft andissue a suitable contract to cover these arrangements. You maytake this letter as authority to proceed on the lines wediscussed, namely, to construct an electronic calculatingmachine to the instructions of Professor FC Williams.

I am glad we were able to meet with Professor Williams as Ibelieve that the making of electronic calculating machineswill become a matter of great value and importance.

Please let me know if you meet with any difficulties.

Yours sincerely,

B Lockspeiser


Dear Mr Enticknap,

I have just read with great interest the article about the early days of Algol in Resurrection. It brought back many memories of my university days and early industrial career.

I began programming in 1968 in the sixth form at school, sending Algol tapes to the 903 at Medway Polytechnic. When I went to Leeds University in 1970 to read Mathematics and Computational Science, the teaching machine for first year undergraduates was also a 903. It was a 16K machine that had cost the university £22,000 in 1967!

From that machine we moved to a KDF9 and first experienced thejoy of having filestore and on-line access! The latter wasprovided by the Eldon 2 operating system developed by DaveHoldsworth and others. I recall that there were two Algolsystems (it was rumoured that two English Electric teams atKidsgrove and Whetstone respectively had developed them, eachin ignorance of the other). The Kidsgrove variety was acompiler, while Whetstone was interpretive.

After graduation I was absorbed into industry and converted to Cobol; but I was reunited briefly with Algol in the mid-1970s at the Jonas Woodhead Group, who had a DECsystem-10 and were one of only two customers (the other being Whessoe in Darlington) who used it as a commercial rather than a scientific machine. I recall computing Ackermann's function in Algol to demonstrate recursion to a day-release student.

Best wishes,

Yours sincerely,

Tony Peach
Telford, Shropshire
21 September 1992
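Editor's note: Ackermann's function, which Mr Peach mentions, remains the classic demonstration of recursion - it cannot be expressed with simple bounded loops, and its values (and call depth) explode even for tiny arguments. A minimal sketch, in Python here rather than the original Algol:

    def ackermann(m: int, n: int) -> int:
        # Doubly recursive: the recursion grows ferociously with m.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    print(ackermann(2, 3))   # 9
    print(ackermann(3, 3))   # 61 - already thousands of recursive calls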


TopPreviousNext

Working Party Reports


Elliott 401

Chris Burton, Chairman

We formed the Elliott 401 working party last autumn, once theScience Museum's Computer Conservation and Archiving Projectbecame an authorised funded project. Our objective isspecifically to conserve and restore the Museum's 401.

This historic computer, an ancestor of Pegasus, is a one-off machine built by Elliott Bros in 1952 to prove packaged construction techniques and the use of magnetostrictive storage in a complete computer system. It was demonstrated at the Physical Society Exhibition in April 1953, and subsequently installed at Cambridge University, where it was evaluated and modified by Christopher Strachey.

Later it was installed and used at the Rothamsted AgriculturalResearch Station until 1965, when it was donated to theScience Museum. So it has been in store for nearly 30 years.

Our work has two aspects - conservation and restoration. Theformer is concerned with the careful surveying of all theequipment, cleaning it, repairing damage to insulation,metalwork, paint finishes and so on, and generally bringingthe machine to a stable and preservable state, as a normalmuseum artefact.

The restoration aspect will be concerned with making themachine work again functionally. The Working Party has beenset up with three kinds of members - Museum staff with theknowledge and resources to do the conservation work, a smallnumber of CCS volunteers with the experience and resources totackle the restoration, and, importantly, the survivingoriginal development team members, who act as consultants andsources of know-how.

The project is high profile from the Museum's point of view,and care is being taken to set a high standard of proceduresin the spirit of the CCS aims to use voluntary expertise inconjunction with formal curatorial practice. A detailed planand list of tasks is evolving, which will probably lead to anoperational machine in about two years, depending on availableresources.

We have held four formal Working Party meetings, which are themechanism to get agreement on what to do and how to do it.These will probably become monthly. In between we have hadoccasional days of preparatory investigation work.

But the main work has been the excellent conservation progress on the major units of the system. This is likely to take a further nine months. Already the base plinth and part of the top ducting have been completed, and the site at the end of the Old Canteen is beginning to take shape. Because we cannot use any of the units until they have been conserved, we have adopted a strategy of gradual re-commissioning using temporary sub-systems (particularly the power supply system), in order to get some restoration work going in parallel with conservation.

A significant number of original drawings exist in the ScienceMuseum Library, which have been copied, but sadly some keydocuments such as the `current' logic diagrams are missing. Wewill have to reconstruct such information by examining andrecording the back-wiring.

We intend to attempt to rescue any information which may still be on the tracks of the drum. We have not yet obtained any contemporary program tapes, and it will be some time before we are in a position to consider running programs, though we are considering the desirability of a simulator.

So, a good start to an ambitious long-term project, thanks tothe skill and enthusiasm of everyone involved.

Elliott 803

John Sinclair, Chairman

For some time now the processor has been suffering from anintermittent temperature-dependent store fault. It has taken awhile to locate the problem, but the fault has now been found,and I am waiting for a replacement store read amplifier fromthe warehouse at Hayes.

The reliability of the film system has improved enormouslyover the past six months. It now works each time it isswitched on, whereas previously new faults developed almostdaily.

Readers may be interested in a statistical analysis of thefaults we have encountered since the machine started runningin October 1990.

The paper tape station has had 20 component failures, and wehave also found two logic design faults.

The film system has had 36 component failures. We have alsofound one original wiring fault and three connections that hadnever been soldered. (It is astonishing that these connectionsnonetheless worked perfectly throughout the machine'soperational lifetime - it has taken 30 years to discoverthem!)

The central processor has had six component failures. Herealso we found one connection that had never been soldered.

The high incidence of faults on the paper tape station and film system is due to the type of logic used, namely the Minilog potted logic element. These elements have proved to be much less reliable than the logic elements used in the processor, possibly because the transistors in the Minilog element are surrounded by a potting compound.

Our 803 emulator is now almost complete, as reportedelsewhere. I have modified the software and hardware of a Z80processor board, normally used to monitor telephone calls on acorporate switchboard, so that it now reads the 5-bit parallelcharacter signals transmitted to the Creed teleprinter that isin the 803's paper tape station.

The Z80 converts the signals into a serial character datastream suitable for connection to the serial comms port of aPC. This facility allows the 803 to output data (normallycopies of paper tapes) into a PC disc file for use by theemulator, and also provides an alternative means of paper tapeduplication or archiving.
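The conversion at the heart of this is a simple table lookup: each 5-bit parallel code is translated and forwarded as a serial byte. A rough sketch of the idea in Python - the Creed printer used a 5-bit Baudot-type code, but the table fragment below is illustrative rather than the 803's actual character set:

    # Fragment of a 5-bit code -> character table (illustrative values only).
    LETTERS = {0b00011: "A", 0b11001: "B", 0b01110: "C", 0b01001: "D"}

    def convert(codes):
        """Translate 5-bit parallel codes into a byte stream for the PC."""
        out = bytearray()
        for code in codes:
            ch = LETTERS.get(code & 0b11111)   # mask to 5 bits, then look up
            if ch is not None:
                out.append(ord(ch))            # forward over the serial link
        return bytes(out)

    print(convert([0b00011, 0b11001]))   # b'AB'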

DEC

Adrian Johnstone, Chairman

We have organised our space within the old canteen,transferring some of our PDP-8 equipment into a store in theold School of Sculpture buildings which are nearby. This hasmade space for our PDP-11/20 and 11/34 systems which are beingshipped up to the Museum from RAF Wroughton near Swindon.

The PDP-12 has had one tantrum, during which wisps of smoke appeared. The source of this fault has never been traced, but since it has not recurred we have decided to leave well alone.

The Open Day was a success, with the PDP-12 and a variety ofPDP-8s entertaining our visitors.

Software and Emulators

Tony Sale, Chairman

Members of the working party have continued their work ondeveloping emulators since the last issue. The activity isdeveloping quite an impetus, though it is proceeding throughthe efforts of individuals rather than via formal meetings.

I have now acquired a 486-based personal computer, and am inthe process of transferring the code from my previous Amiga,so as to facilitate further development of the animationmastering system.

Peter Onion's 803 emulator is now almost complete. We havereceived a good response from members who have sent us discsso that they can receive a copy. These discs will be sent outshortly.

Work has started on developing other emulators. One of themost interesting is a third year degree project beingundertaken by Neil Mitchell of King's College, London: he isdeveloping an emulator for the Ferranti Mercury.

We are keen to recruit more people to this activity. Some may be deterred through not knowing how to set about it, so we are considering holding an evening meeting in the late spring or early summer to discuss the best ways of proceeding. At this meeting Chris Burton and Peter Onion would talk about the problems they have encountered (and surmounted) in developing their emulators, and would-be emulator designers would have an opportunity for informal discussions about their ambitions and difficulties.
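For anyone wondering where to start, the heart of every such emulator is the same fetch-decode-execute loop over an array representing the historic machine's store. A minimal skeleton in Python - the 16-bit word layout and opcodes here belong to no real machine and are purely illustrative:

    memory = [0] * 8192          # the emulated machine's word store
    acc, pc, running = 0, 0, True

    LOAD, ADD, STORE, HALT = 1, 2, 3, 0    # invented opcodes

    while running:
        word = memory[pc]                       # fetch
        op, addr = word >> 13, word & 0x1FFF    # decode: 3-bit op, 13-bit address
        pc += 1
        if op == LOAD:                          # execute
            acc = memory[addr]
        elif op == ADD:
            acc = (acc + memory[addr]) & 0xFFFF
        elif op == STORE:
            memory[addr] = acc
        elif op == HALT:                        # an empty store halts at once
            running = False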

Pegasus

John Cooper, Chairman

We entered the Pegasus restoration project for the BCSTechnical Award last year. During July the BCS AssessmentPanel came to see the results of our work, and interviewedsome of the working party. We also provided them with ourdocumentation of the progress of the project.

During the Evening Reception following the Open Day, it was announced that the BCS had given the Pegasus project a Commendation. It was the first time the BCS had made such an award - they felt it was more appropriate than a Technical Award. Subsequently, Ewart Willey handed a plaque commemorating the achievement to the Director of the Science Museum, Dr Cossons.

I'd like to take this opportunity to thank everyone who hastaken part in the project, especially those who attended the Assessment Panel meeting.

The machine continues to operate satisfactorily. We are now able to achieve operation with 10% margins on three voltages (-150, 200 and 300 volts).

We have started to repair the broken packages that accumulated during the restoration period. It has been quite stimulating to find how recollections of the old technology came flooding back during this work.

There is now a good chance that the Pegasus will be put on public display in the near future.

S-100 bus

Robin Shirley, Chairman

The main event since the last report has been the donation by Longfield School, Kent, of a complete working Altair 8800b microcomputer, plus parts and spares from several others. I have described the historical significance of this machine in my article `Altair and After' (see page 23).

As well as system units and spares, the donation includes 8-inch floppy drive units, hard disc controllers and a hard disc unit (all in separate external cabinets). Most of this equipment belongs to the less interesting (but more usable) second generation 8800b series, built after the original Altair manufacturer MITS was bought up by Pertec.

The one exception is a floppy drive unit of original 1975 MITS manufacture, built more like an engineering prototype than something actually sold to users, as those who saw it at the Open Day can attest. The machines were used and upgraded at the school over a number of years, and have relatively recent 64Kb Comart DRAM boards and Soroc VDUs.

The Pertec Altair hard disc unit is of the removable cartridge type used in minicomputers of that time - very solid and heavy. It is not yet running. The floppy disc software uses 16-sector hard-sectored 8-inch discs, and although it has booted successfully under its proprietary Altair Basic operating system (not standard CP/M), we are cautious about going further until we have found formatting and disc copying utilities for this unusual format, so that we can back up the master discs. A copy of Altair CP/M on regular soft-sectored 8-inch discs would be useful - can anyone help?

I have also had a visit from Emmanuel Roche, who has been active in starting up a CP/M Plus ProGroup - a successor to the CP/M User Group - and is producing a monthly journal. This has already reached its fifth issue (numbered 4, the first issue having been numbered 0!). The journal contains much material of general historical interest - for example, issue 1 is devoted to Alan Turing, including a reprint of the original 1936 Turing machine paper, and issue 2 contains a reprint of von Neumann's 1945 EDVAC report.

Readers who would like to obtain copies of this publication should write to Emmanuel Roche, 8 Rue Herluison, 10000 Troyes, France. I do not know how much each issue costs, but as Emmanuel is a student and is bearing the production cost himself, it would be appropriate to offer him something.

Emmanuel has also preserved the complete CP/M User Group Software Library (on 8 inch SSSD discs), a total of some 120Mb. I hope to have this on PC-compatible media before long.


TopPreviousNext

Forthcoming events


3 February 1993 In steam day
25 February 1993 Evening meeting
3 March 1993 In steam day
25 March 1993 Evening meeting
7 April 1993 In steam day
29 April 1993 Evening meeting
5 May 1993 In steam day
20 May 1993 Seminar on NPL and ACE
24 June 1993 Seminar on restoration of historic computers

In Steam Days start at 10 am and finish at 5 pm. Members are requested to let the secretary know before coming, particularly if bringing visitors. Contact him on 071-938 8196.

Members will be notified about the contents of the remaining evening meetings once the Committee has finalised the 1993 programme. All the evening meetings take place in the Science Museum Lecture Theatre and start at 5.30pm.


TopPreviousNext

Committee of the Society


[The printed version carries contact details of committee members]

Chairman   Graham Morris FBCS
Secretary  Tony Sale FBCS
Treasurer   Dan Hayton
Science Museum representative   Doron Swade
Chairman, Pegasus Working Party  John Cooper MBCS
Chairman, Elliott 803 Working Party  John Sinclair
Chairman, Elliott 401 Working Party  Chris Burton
Chairman, DEC Working Party   Dr Adrian Johnstone CEng, MIEE, MBCS
Chairman, S-100 bus Working Party   Robin Shirley
Editor, Resurrection   Nicholas Enticknap
Archivist   Harold Gearing

Dr Martin Campbell-Kelly
George Davis CEng FBCS
Professor Sandy Douglas CBE FBCS
Chris Hipwell
Dr Roger Johnson FBCS
Ewart Willey FBCS
Pat Woodroffe


TopPrevious

Aims and objectives


The Computer Conservation Society (CCS) is a co-operative venture between the British Computer Society and the Science Museum of London.

The CCS was constituted in September 1989 as a Specialist Group of the British Computer Society (BCS). It is thus covered by the Royal Charter and charitable status of the BCS.

The aims of the CCS are to

Membership is open to anyone interested in computer conservation and the history of computing.

The CCS is funded and supported by a grant from the BCS, fees from corporate membership, donations, and the free use of Science Museum facilities. Membership is free, but some charges may be made for publications and attendance at seminars and conferences.

There are a number of active Working Parties on specific computer restorations and early computer technologies and software. Younger people are especially encouraged to take part in order to achieve skills transfer.


 

