Thank you, Roger, for all the valuable considerations.

On Nov 23, 2020, at 7:06 AM, Roger Shepherd <rog@xxxxxxxx> wrote:

Larry
Following up on Tony's numbers with a reference to Uwe's numbers and Denis's poster and Roger's comment -
Starting with what we figured before - say 100,000 transistors for a T800 minus memory, plus 50,000/KB for its 4 KB of memory - and recalculating for Tony's 32 KB gives 1,700,000 per Transputer (almost all dedicated to memory), and 10,000 of them is 17B transistors - short of Nvidia's 54B (are we missing a third dimension here?). However, Uwe suggests much more weight for the links (his version dedicates half its LUTs to links, while the other consensus was less than a third, even not counting memory), and we hear from Roger, commenting on Denis's poster, that link transistors are physically BIGGER than others.
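The budget above is easy to check; this sketch just restates the thread's own figures (100,000 transistors for a T800 core excluding memory, 50,000 transistors per KB of RAM):

```python
# Back-of-envelope transistor budget using the figures from the thread.
CORE = 100_000          # T800 logic, memory excluded (thread's estimate)
PER_KB = 50_000         # transistors per KB of on-chip RAM (thread's estimate)
MEM_KB = 32             # Tony's proposed 32 KB per node
NODES = 10_000

per_node = CORE + PER_KB * MEM_KB   # 1,700,000 - almost all memory
total = per_node * NODES            # 17,000,000,000

print(f"per node: {per_node:,}")    # per node: 1,700,000
print(f"total:    {total:,}")       # total:    17,000,000,000 (vs Nvidia's ~54B)
```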
My comment was not about the size of the transistors; it was about the density of transistors within the real-estate dedicated to links. I don’t *know* why this is, but it is likely that the density is limited by wiring. This is a perennial problem - the manufacturing process used for the transputer had (from memory) 2 layers of interconnect - 1 metal and 1 polysilicon. Modern processes have a lot more capability - perhaps 10 layers of wiring. In practice, even with this, logic density is limited by interconnect, not by transistors. “Transistors are free” isn’t quite true, but they aren’t the critical resource in modern designs. Again, the constraints on interconnect are such that local (same clock domain) communication is cheap and fast, while non-local communication is expensive and slow.
Purely distributed design, like networks of Transputers, then has a big advantage IN PRINCIPLE. N.B. Throughput is cheap - “just” go parallel; it’s a matter of economics. Latency is hard - you’re up against the laws of Physics.
It shouldn't necessarily be that bad - IN PRINCIPLE, again, the network diameter is a "logarithm" of the node count, not a "square root" - you use hyperbolic geometry (not Euclidean geometry) for your network. This can be done with as few as three links per node (a triangular grid, but with more than six triangles around a point - three, four, or five around a point gives you a Platonic solid, six gives you Euclidean geometry, more than six gives you hyperbolic). Of course you have to fit them on the die somehow ;-)
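The logarithm-versus-square-root claim can be made concrete with a toy count (my sketch, not from the thread): in a flat 2-D grid the number of nodes within graph distance r grows like r^2, so reaching N nodes needs a radius like sqrt(N); with exponential (hyperbolic-style) growth the radius is like log(N). A 3-regular tree is used below as a crude stand-in for a three-link hyperbolic grid.

```python
def radius_flat(n):
    """Smallest r such that a 4-connected grid ball (2r^2 + 2r + 1 nodes)
    contains at least n nodes - radius grows roughly like sqrt(n)."""
    r = 0
    while 2 * r * r + 2 * r + 1 < n:
        r += 1
    return r

def radius_tree(n):
    """Smallest r such that a 3-regular tree ball (1 + 3*(2^r - 1) nodes)
    contains at least n nodes - radius grows roughly like log2(n)."""
    r = 0
    while 1 + 3 * (2 ** r - 1) < n:
        r += 1
    return r

N = 10_000
print(radius_flat(N), radius_tree(N))  # 71 12
```

So for 10,000 nodes the flat grid needs about 71 hops across, while the tree-like (hyperbolic) growth needs only about 12.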
And some more transistors are needed - I don't know how many - for Tony's extra interconnects. The fact that the geometry will be simply repetitive will help, and distributing resources helps immensely.
So we have a bunch of variables here. But by backing off the 32KB memory (we need analysis of use cases here) we get lots more Transputers - almost six times as many. When I say "use case" I think climate modeling is a good thought exercise. And it's true that Nvidia claims 5 petaflops, which works out to 500 gigaflops per core, which seems high. But these are all very creative questions.
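The per-core figure follows directly from the thread's numbers (5 petaflops spread over the 10,000 nodes counted earlier; my arithmetic, not a claim about any particular Nvidia part):

```python
# Dividing the claimed aggregate throughput over the node count from the thread.
PFLOPS = 5e15        # claimed 5 petaflops
CORES = 10_000       # node count used earlier in the thread

per_core = PFLOPS / CORES
print(per_core / 1e9, "gigaflops per core")  # 500.0 gigaflops per core
```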
“Use case” really matters. We know that for some domains, specialised machines (GPUs, ML/AI-PUs) do very well. You have to get the balance between compute, store and communication right. If your processor is underpowered you have to use too many of them, causing problems with storage and communication. Remember, the program has to be stored locally, so requiring double the number of processors because of limited computation capability means twice the store dedicated to program.
I think we can get smart here by giving some ground on "locally". Remember, in CSP any number of processes can share READ-ONLY memory, so you can have a sequence of "loading state" and "running state" (like the Transputer worm), and during the running state a big block of read-only memory holding the code is shared by, say, 100 nodes (each running the same, or almost the same, program). This requires a bit of design attention, because computer science says "any number can read in parallel" but in the real world some sequencing is involved.
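A minimal sketch of that two-phase pattern (mine, not from the thread), using Python threads as stand-ins for nodes: one writer fills the shared block during the "loading state", a barrier marks the transition, and then 100 "nodes" read the same block concurrently during the "running state":

```python
import threading

NUM_NODES = 100
barrier = threading.Barrier(NUM_NODES + 1)  # 100 nodes + 1 loader
shared_code = None                          # written once, then read-only
results = [0] * NUM_NODES

def loader():
    global shared_code
    shared_code = tuple(range(1000))  # "loading state": the single writer
    barrier.wait()                    # everyone moves to "running state"

def node(i):
    barrier.wait()                    # wait until loading has finished
    results[i] = sum(shared_code)     # all nodes read the same block

threads = [threading.Thread(target=node, args=(i,)) for i in range(NUM_NODES)]
for t in threads:
    t.start()
loader()
for t in threads:
    t.join()

assert all(r == sum(range(1000)) for r in results)  # every node saw the block
```

The barrier is the "bit of design attention": the writes must be sequenced before any of the parallel reads begin.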
Use cases would be at the center because there would be a manufacturing process that cheaply varies parameters to create a chip optimized for any given use case.
You’ll also need more communication capability to deal with the number of processors. It’s absolutely the case that the transputer processor is underpowered by today’s standards - I don’t know by how much.
I wonder if this is true if you analyze it in units of clock cycles per single core. I don't think it is true if you analyze it in clock cycles per million transistors. So, in your budget for this device, you need to allow many more transistors for the processor, more for RAM - I’m sure 32k is too small - and a lot more for interconnect. The system structure is likely to be “transputer” (processor, RAM and limited comms) plus routers.
The nearest machine I know of that might have this sort of architecture is the Graphcore Colossus (https://www.graphcore.ai/products/ipu): 60B transistors, 1472 processor cores, and 900MB of in-processor memory.
At 6 transistors per bit, 1472 * 900M * 8 bits comes out to far too many transistors; the memory must mostly be shared. Its comms system provides non-blocking communication in any pattern (so, arguably, it is limited as a general purpose machine - programs have to be bulk synchronous - which is a problem).
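Roger's arithmetic can be checked both ways (my sketch; public Graphcore figures describe the 900MB as a chip-wide total, which matches the "mostly shared" conclusion):

```python
# Two readings of "1472 cores, 900MB": per-core memory vs one shared pool.
T_PER_BIT = 6           # transistors per SRAM bit, the thread's figure
CORES = 1472
MEM_BYTES = 900 * 1_000_000

per_core_reading = CORES * MEM_BYTES * 8 * T_PER_BIT  # 900MB per core
shared_reading = MEM_BYTES * 8 * T_PER_BIT            # 900MB total, shared

print(f"{per_core_reading:.2e}")  # 6.36e+13 - a thousand times the ~6e10 die
print(f"{shared_reading:.2e}")    # 4.32e+10 - most of a 60B-transistor budget
```

The per-core reading overshoots the die by three orders of magnitude, so only the shared reading fits; even then, memory dominates the transistor budget.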
That is where design at the program level comes in. Hyperbolic networks can be used for comms, though path length matters, and some fancy footwork is needed to avoid echoes. Again, use cases matter. My notion only needs about 100 nodes to be synchronous, but it maybe could go to 100,000 or so (conceptually) with a little work on a die.
Larry