The design of the AGC went through several stages, which for
the sake of this discussion we'll call "blocks". The
different block designs, while similar in many ways, were
quite different in others. Rather than try to cover all of
them on one page, the different blocks are covered on separate
pages; this page confines itself to the Block III AGC.
The first thing to understand about the Block III AGC is that
there's no such thing. However, if you have had the
patience to read much of what we have to say on this site about
the architecture of the Block I AGC (used only in a handful of
unmanned missions) and especially the Block II AGC (used in all
the manned missions), you will see that certain aspects of the
architecture seem to have led to related inefficiencies in the
software, and that the system might have been more efficient
had its architecture been designed differently from the ground up.
Of course, in blithely stating that, we shouldn't pretend that
this should have been obvious in advance, nor that there
weren't inescapable forces that led to this kind of
situation. For one thing, the "feature creep" (over which
the developers had no control) on the original development
project, combined with the need to maintain a certain backward
compatibility in order to keep from having to rewrite the
software over and over again, combined too with the inherent
multiplicative complexity in increasing instruction and memory
word-lengths at the time, would have forced the team to allow
only incremental changes to the architecture.
However, that doesn't stop us from speculating about what a
better Block III system might have looked like, had the
developers been freed from these kinds of restrictions. But
as it happens, we don't need to speculate about it! As he
explains in a footnote in his book,
Left Brains for the Right Stuff, original AGC developer Hugh
Blair-Smith put together some notes for a hypothetical Block III
AGC architecture that he hoped would have doubled the processing
speed, and (to quote his words)
"I did the thinking and writing just a few years ago, but strictly in terms of the mid-'60s technology we actually used. Probably the greatest risk of non-fidelity is in power consumption, because I take advantage of the Harvard architecture to cycle both fixed and erasable memories simultaneously whenever practical (which is mostly)."
"I plan to publish this fantasy on-line for the amusement of all the people around the world who enjoy building hardware or software replicas of the AGC."
Actually, only the second quote is from the book. While
I don't know that "enjoy" is the word I would necessarily use for
this activity, Hugh has in fact sent his notes on the topic to
us, and we present them here for your delectation.
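Hugh's remark about cycling both fixed and erasable memories simultaneously hints at where the hoped-for doubling of speed would come from. The following Python sketch is purely illustrative (the cycle accounting and the two-MCT baseline are invented for exposition, not taken from Hugh's notes): if the fetch of the next instruction from fixed memory can overlap the operand access in erasable memory, a memory-bound instruction costs one memory cycle time (MCT) instead of two.

```python
# Purely illustrative timing model; the cycle accounting below is invented
# for exposition and is not taken from Hugh's Block III notes.

MCT = 11.7e-6  # Block II memory cycle time, roughly 11.7 microseconds


def serial_mcts(n_instructions):
    """One MCT to fetch each instruction, then one MCT for its operand."""
    return 2 * n_instructions


def overlapped_mcts(n_instructions):
    """Fetch the next instruction from fixed memory while the operand
    cycle runs in erasable memory (the Harvard-architecture overlap)."""
    return n_instructions


n = 1000
print(f"serial:     {serial_mcts(n) * MCT * 1e3:.2f} ms")
print(f"overlapped: {overlapped_mcts(n) * MCT * 1e3:.2f} ms")
```

Real instructions on the actual hardware take anywhere from one to several MCT, so the factor of two here is only the idealized memory-bound case.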
"What I'd love is for some too-much-time-on-their-hands people to port, say, Luminary 099 from Block II to Block III and see how much shorter and faster it is!"
A fascinating idea, obviously. One might wish not merely for a port of one or more of the Block II AGC programs, but perhaps even for a Block III assembler and simulator. If you're such a person, give it some thought. Perhaps it would be your road to fame!
Hugh's notes comprise two Word documents,
and since they haven't exactly been put into a finalized form,
he has also taken the trouble to describe the best way of reading
them. I quote, though a brief summary might be, "Don't
start with the testing document":
"... you'll probably find the testing document pretty opaque, since it records the output of a simulator that existed only in my headbone. In reading the doc, I would recommend starting with the general discussions at the beginning, then skipping over the NI (Native Interpreter) code examples in the middle initially, and going to the "regular" interpreter code examples toward the end—the idea being to get your head around how Block III performs the same kind of interpreter as Block II does. Then dive bravely into the NI part to see how it interprets the two-word format in RAM of what would otherwise have been one-word native instructions in ROM. If you were a professor, you could assign a hapless grad student to do that! The main reason I spent so much design effort on the NI was to make sure the native design could handle it without too much ugliness, and I did make several enhancements to the native design as a result.
"For relatively short pieces of code, like those in the document, it should be possible to have a fairly satisfactory experience just writing source code and eyeballing it for running time in MCT (as well as for what I might call Nortoning)."
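For readers puzzling over what "the two-word format in RAM of what would otherwise have been one-word native instructions in ROM" means in practice, here is a toy Python sketch. The word layouts and opcode names are invented for exposition (the actual formats are defined in Hugh's notes); the shape of the idea is that a native instruction packs opcode and address into a single fixed-memory word, while an interpreter reads the same fields from a pair of erasable-memory words.

```python
# Toy illustration only: the opcode table and bit layouts below are invented
# for exposition and are NOT the actual Block II or Block III formats.

NATIVE_OPS = {0o1: "AD", 0o2: "TS", 0o3: "CA"}  # hypothetical opcode table


def decode_native(word):
    """A one-word 'ROM' instruction: 3-bit opcode above a 12-bit address
    (an invented layout)."""
    opcode = (word >> 12) & 0o7
    address = word & 0o7777
    return NATIVE_OPS[opcode], address


def decode_interpreted(word_pair):
    """A two-word 'RAM' instruction: the same fields, one per word, as a
    software interpreter might fetch and dispatch them."""
    opcode_word, address_word = word_pair
    return NATIVE_OPS[opcode_word & 0o7], address_word & 0o7777


# The same logical instruction in both encodings:
native = (0o1 << 12) | 0o1234   # one word in fixed memory
interpreted = (0o1, 0o1234)     # two words in erasable memory

assert decode_native(native) == decode_interpreted(interpreted)
```

The cost of the two-word form is the extra fetch and the decode loop itself, which is why making the native design "handle it without too much ugliness" was worth so much of the design effort.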