Larry, I am sorry to muddy the waters, but... On the first point, for many, possibly most, of the computational problems that we can currently do, you are probably right about the locality. But we have found that trying to work with unstructured meshes breaks the mapping to simple structures, and locality becomes something you cannot rely on.

We are working on another problem, extending our current capability to model the heating effect of a lightning channel or, more generally, an electric arc. We have to cope with conducted heat (and, incidentally, resistive heating in material near the point of contact with an electric arc), but we also have to add radiant heating. The first two can be treated as propagation problems among near neighbours, but the latter cannot. I could try to propagate the radiant component as another EM wave, but I run into other problems of directionality – perhaps I could do it with a kind of model of the Poynting vector, I don't know. My original idea was to superimpose ray tracing on the EM and thermal propagation models, and that will definitely break the locality.

Your last point links with this. I don't think there is a strong analogy with the photons. They will be there whether or not people turn their eyes in the right direction to receive them, and since those photons are (a) in the trillions, (b) massless, and (c) do not interact with anything, it does not matter that they are there – unless you are trying to sleep. I cannot see a viable way of doing this with data from memory. Turning the problem around so that potential viewers can pick out the photons/data they want is either (surely) in the nature of Random Access Memory, but not without interactions, or (in the cases where your analogy is closest) riven with difficulties from a computability point of view. Generally we are not interested so much in the photons that get to us without impediment, nor in those that are stopped by an impediment; the interesting and useful ones are those that only just creep by or touch impediments and are affected by their close encounter, e.g. by scattering. Finding these edges is really hard, and I have not yet come across a viable algorithm.

I know we should not push analogies too hard, and perhaps I am being too critical, but I think you raise some interesting issues which have some real impact on what I am involved in.

Regards,
Chris
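Chris's distinction between conducted and radiant heating can be sketched in a toy one-dimensional model (everything here – the grid, the coefficients, the 1/distance coupling – is invented for illustration, not taken from any real solver): conduction updates each cell from its immediate neighbours only, while the radiant term couples every cell to every other cell, which is exactly what breaks locality.

```python
# Toy 1-D heating model: local conduction vs non-local radiant coupling.
# All coefficients and the coupling law are illustrative, not physical.

def step(temps, k_conduct=0.1, k_radiant=0.001):
    n = len(temps)
    new = list(temps)
    for i in range(n):
        # Conducted heat: near-neighbour stencil -- purely local.
        left = temps[i - 1] if i > 0 else temps[i]
        right = temps[i + 1] if i < n - 1 else temps[i]
        new[i] += k_conduct * (left + right - 2 * temps[i])
        # Radiant heat: every cell exchanges with every other cell --
        # non-local, so a node needs the whole field, not just neighbours.
        for j in range(n):
            if j != i:
                new[i] += k_radiant * (temps[j] - temps[i]) / abs(j - i)
    return new

temps = [0.0] * 9
temps[4] = 100.0          # hot spot, e.g. an arc attachment point
temps = step(temps)
```

Note that the radiant loop makes each step O(n^2) and requires every node to see the entire temperature field, whereas the conduction stencil needs only nearest-neighbour communication.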
Prof. Christopher C R Jones BSc PhD CEng FIET
BAE Systems Engineering Fellow
EMP Fellow of the Summa Foundation
Principal Technologist – Electromagnetics
Military Air & Information
Electromagnetic Engineering, W423A
Engineering Integrated Solutions
Warton Aerodrome, Preston PR4 1AX
Direct: +44 (0) 3300 477425
Mobile: +44 (0) 7855 393833
Fax: +44 (0) 1772 855262
E-mail: chris.c.jones@xxxxxxxxxxxxxx
Web: www.baesystems.com
BAE Systems (Operations) Limited

From: occam-com-request@xxxxxxxxxx [mailto:occam-com-request@xxxxxxxxxx] On Behalf Of Larry Dickson
My simple notion obviously creates an agenda for additional work on some basic problems. Some comments to Roger's questions:

On Nov 25, 2020, at 2:00 AM, Roger Shepherd <rog@xxxxxxxx> wrote:
Larry,
On 25 Nov 2020, at 01:19, Larry Dickson <tjoccam@xxxxxxxxxxx> wrote:

Roger, I don’t understand. I’m missing the model of computation you are thinking of.

I believe that interesting problems don’t map on to simple structures

I believe most of them do - climate modeling, classical physics of structures, AI, fluid flow . . . all local. Others have brought up quantum problems, which do have nontrivial action at a distance, but that is just one topic of many. Whether "local" = "simple structures" is of course subject to dispute, but I think with some creativity it can be made to work. (One of the "basic problems" . . . )

and the only way to make those problems programmable is to support an abstract model of programming where you’re able to address problems with non-local communication.

That amounts to accepting defeat, in my opinion. It requires centralization of everything and reduces effectiveness by multiple orders of magnitude. (Compare the (clock speed) * (transistor count) increase since, say, 1985 to the increase in what computers can actually do. Almost all the advances have been in raw repetition, like screen resolution and data transmission speeds - totally "simple structures".)
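Larry's locality claim can be illustrated with a minimal relaxation sweep (a hypothetical example, not from either correspondent): each interior cell reads only its two immediate neighbours, so the update maps directly onto a process-per-cell layout with nearest-neighbour channels and no global communication.

```python
# Jacobi-style relaxation: each interior cell reads only its two
# neighbours, so the computation is entirely local.

def relax(u):
    return ([u[0]]
            + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, len(u) - 1)]
            + [u[-1]])

u = [0.0, 0.0, 0.0, 0.0, 100.0]   # fixed boundary values at both ends
for _ in range(50):
    u = relax(u)
# u converges toward the linear profile 0, 25, 50, 75, 100
```

Because each cell touches only its neighbours, the same sweep parallelises over any number of nodes with only nearest-neighbour messages per iteration.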
Proposals were made around 1990 for how to do this and how to hide any mismatch between the physical connectivity of the world and the logical connectivity of a program. There’s also a lot of understanding of how communication networks behave and what we need to do to make them effective. Low connectivity leads to longer latency, which leads to a requirement for a high degree of excess parallelism, which increases the size of the problem you need to tackle. Or, to put it another way, it limits the benefit of parallelism for small problems. This is a practical, engineering issue.

Agreed, but mostly people have given up rather than attempting to solve it. The exception is GPUs (and maybe AI?), and GPUs have been pretty successful. The alternative, as you say, is "an abstract model of programming", which has imposed something like a fifth-root law on our computing progress (i.e. resources increase by 100,000 and usable power increases by 10). Breaking out of this trap deserves some effort.
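Larry's "fifth root law" is just this arithmetic: if resources grow by a factor of 100,000 while usable power grows by a factor of 10, then usable power is scaling as the fifth root of resources, since 100,000^(1/5) = 10.

```python
# The "fifth root law": usable power ~ resources ** (1/5).
resources_growth = 100_000   # e.g. (clock speed) * (transistor count) growth
usable_growth = resources_growth ** (1 / 5)
# usable_growth is 10 (up to floating-point rounding)
```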
Of course I am ignoring data throughput questions, but they can be dealt with using extra gatherers without increasing the logarithmic latency. It scales every day to large problems - check out your high-resolution GPU.
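The "extra gatherers" can be pictured as a reduction tree (a sketch under the assumption of pairwise combining; none of these names come from the thread): N partial results are merged in ceil(log2 N) stages, so adding gatherers absorbs the data volume while latency stays logarithmic in the number of nodes.

```python
import math

def gather(values, combine):
    """Combine N values in pairwise stages; return (result, stage_count)."""
    stages = 0
    while len(values) > 1:
        # Each stage halves the number of outstanding values.
        values = [combine(values[i], values[i + 1]) if i + 1 < len(values)
                  else values[i]
                  for i in range(0, len(values), 2)]
        stages += 1
    return values[0], stages

# Sum partial results from 100 hypothetical nodes.
total, stages = gather(list(range(100)), lambda a, b: a + b)
```

With 100 nodes the tree needs only 7 combining stages, however much data each gatherer forwards per stage.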
As for 100 nodes sharing read-only code, given an instruction cache on each node, it ought to be doable with FIFOs. Has anyone really tried? It seems to be no different from lots of processes accessing a solid-state disk, and we do that all the time.

When do we do this? Sure, there can be a lot of operating-system processes accessing a disk, but they absolutely aren’t running at the same time - and if you can happily time-multiplex a processor, why replicate it?

It is purely a question of resources. I am thinking of "big code, small data" problems (data means alterable data), into which class many scientific problems fall. Here’s why I believe it is a soluble problem. Think of 100 people at a conference watching a slide show on a big screen. They all view simultaneously, and can focus on the same part of the slide, or different parts, or both, with no interference. We all take this for granted, but from an information-flow point of view it is interesting. A lot of photons are flying. My notion is to do something analogous with read-only memory, and we have quite a bit of resources to work with - say 30% of the local resources of 100 processors. The idea is to make it happen very fast with resources that are small per processor but may be considerable when totaled over 100 processors.

Larry
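Larry's slide-show picture of 100 nodes sharing read-only code can be turned into a toy simulation (the cache size, node count, and access pattern are all invented for illustration): each node keeps a small private instruction cache, and only misses would generate requests on the shared FIFO, so a tight loop produces almost no shared traffic.

```python
# Toy model of 100 nodes sharing read-only code through small caches.
# Cache size, node count, and access pattern are illustrative only.

from collections import OrderedDict

class Node:
    def __init__(self, cache_lines=8):
        self.cache = OrderedDict()   # small LRU cache of code lines
        self.cache_lines = cache_lines
        self.misses = 0

    def fetch(self, addr, shared_code):
        if addr in self.cache:
            self.cache.move_to_end(addr)     # hit: no shared traffic
        else:
            self.misses += 1                 # miss: one request on the FIFO
            if len(self.cache) >= self.cache_lines:
                self.cache.popitem(last=False)
            self.cache[addr] = shared_code[addr]
        return self.cache[addr]

shared_code = {a: f"insn-{a}" for a in range(64)}    # read-only "big code"
nodes = [Node() for _ in range(100)]
# Every node runs the same tight 4-instruction loop 1000 times.
for node in nodes:
    for _ in range(1000):
        for addr in (0, 1, 2, 3):
            node.fetch(addr, shared_code)

total_fetches = 100 * 1000 * 4
total_misses = sum(n.misses for n in nodes)
```

In this sketch each node misses only on its first pass through the loop, so 400,000 fetches cost just 400 shared-store requests - the "photons" the viewers want are served from the local cache.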
Roger

On 24 Nov 2020, at 16:56, Larry Dickson <tjoccam@xxxxxxxxxxx> wrote:

Larry