Scientific processors
- To: Larry Dickson <tjoccam@xxxxxxxxxxx>, Roger Shepherd <rog@xxxxxxxx>
- Subject: Scientific processors
- From: Denis A Nicole <dan@xxxxxxxxxxxxxxx>
- Date: Wed, 25 Nov 2020 09:28:48 +0000
- Cc: Øyvind Teig <oyvind.teig@xxxxxxxxxxx>, Tony Gore <tony@xxxxxxxxxxxx>, Ruth Ivimey-Cook <ruth@xxxxxxxxxx>, occam-com <occam-com@xxxxxxxxxx>, Uwe Mielke <uwe.mielke@xxxxxxxxxxx>, David May <David.May@xxxxxxxxxxxxx>, Michael Bruestle <michael_bruestle@xxxxxxxxx>, Claus Peter Meder <claus.meder@xxxxxxxxxxxxxx>
Hi All,
After I stopped working on Transputer-based machines, I ran a High
Performance Computing Initiative centre supporting mainly
natural environment modelling, particularly global ocean
circulation. I also ran the benchmarking for two generations of UK
national research supercomputers. All this was nearly twenty years
ago, but some knowledge from that generation may still be useful.
Here goes:
1. Traditional scientific computing is completely dominated by
linear algebra. Supercomputer vendors love to sell their wares
using Linpack, which solves a dense system of linear equations.
Real scientific computing is, however, dominated by solving
Partial Differential Equations, either in "real" space or
in "Fourier" space (the so-called spectral methods). Real-space
solvers work on a sparse matrix; spectral solvers do a
lot of fast Fourier transforms. So, almost all the actual
calculation can be done well using fused multiply-accumulate
instructions; it can also often be organised to work well with
short vector operations. Depending on what is being solved, there
can be a lot of "housekeeping", e.g. because the FFT is on a
sphere (global ocean), because there is some sort of model-defined
mesh (e.g. for engineered structures), or because the computation
has to iterate over different length scales (multigrid methods).
Within a compute node, the trick is to control the housekeeping so
the FPUs can let rip.
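To give a flavour of this (a sketch of my own, not taken from any
particular code), the core of a real-space solver is typically a
stencil sweep of the following shape, which a compiler maps onto
fused multiply-adds and short vector operations:

    /* Illustrative only: a 1-D Jacobi-style stencil sweep.  Each output
       point is a weighted sum of its neighbours, so the loop body compiles
       down to fused multiply-adds and vectorises into short vector ops.  */
    void sweep(int n, const double *restrict u, double *restrict v,
               double c0, double c1)
    {
        for (int i = 1; i < n - 1; i++)
            v[i] = c0 * u[i] + c1 * (u[i - 1] + u[i + 1]);  /* two FMAs */
    }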
2. Integer multiply used to be important for indexing into
multi-dimensional structures. As I remember it, the IBM RS/6000
was the first microprocessor to have strength-reducing
compilers that were so effective that the integer multiplier could
be made slower and cheaper. I guess, however, that integer
multiplier size does not matter so much nowadays.
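For illustration (my example; modern compilers do this automatically),
strength reduction turns the index multiplications of a row-major
array walk into simple pointer increments:

    /* Naive 2-D indexing: an integer multiply on every access. */
    double sum_naive(int rows, int cols, const double *a)
    {
        double s = 0.0;
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                s += a[i * cols + j];      /* i*cols recomputed each time */
        return s;
    }

    /* After strength reduction: a running pointer, no multiply in the loop. */
    double sum_reduced(int rows, int cols, const double *a)
    {
        double s = 0.0;
        const double *p = a;
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                s += *p++;
        return s;
    }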
3. In most systems, the real load is taken by the floating point
units. The IEEE standard is important here for several
reasons.
- Floating multiply is relatively easy as the mantissa is
shorter than for integers and there is not much normalisation to
do. In contrast, addition can require extensive shifting and
exponent correction. Division is a pain but is rarely necessary;
after all, any complex expression can easily be converted into
one with only a single division. Floating point makes the
necessary scaling relatively easy.
- Support for denormalised numbers can be important for
precision, for stability, and for "compliance". Would you
believe a climate prediction performed on a machine whose
arithmetic does not conform to the established standard? It used
to be routine for the denormalised number handling to run
through a slow microcoded path. That can be a performance
disaster. Consider, for example, an infinite impulse
response filter in which the key operation is
    y_{n+1} = y_n * 0.8 + x_n * 0.2
If the input x goes to zero, y will decay into a small
denorm. number and settle there; it will never fall to zero. If
this were a filter on an audio input, all would be well while
you kept talking, but when you turned off the mic. the filter
would run much slower. On an Opteron, this could be a
thousand times slower. The difficulty is compounded for vector
operations, as in a multimedia unit, where the whole
vector is slowed down by one denorm.
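A small self-contained sketch of that filter (constants as above; how
much slower the denormal region runs is entirely platform dependent):

    #include <stdio.h>

    /* First-order IIR low-pass.  Once the input goes to zero, y decays
       geometrically, reaches the denormal range after a few thousand
       steps, and settles on the smallest denormal rather than falling to
       zero.  On hardware that microcodes denormals, every later iteration
       then takes the slow path.                                          */
    int main(void)
    {
        double y = 1.0;
        for (int n = 0; n < 10000; n++) {
            double x = (n < 100) ? 1.0 : 0.0;    /* "mic switched off" */
            y = y * 0.8 + x * 0.2;
        }
        printf("final y = %g\n", y);             /* tiny, but not zero */
        return 0;
    }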
- Floating point arithmetic is famously not associative. This
heavily restricts the optimisations which can be performed while
retaining bit-for-bit identical results. You either accept that
the answers can change, write your code very carefully to
pre-implement the optimisations, or go slow.
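A two-line demonstration (numbers chosen only to make the effect
obvious):

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0e20, b = -1.0e20, c = 1.0;
        printf("%g\n", (a + b) + c);   /* 1: the large terms cancel first */
        printf("%g\n", a + (b + c));   /* 0: c vanishes into b's rounding */
        return 0;
    }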
- Neither integer nor floating-point arithmetic is "complete";
there are overflow and divide-by-zero exceptions. What are we
going to do about them? Actually throwing an exception is a
performance nightmare. In the FPU, we can use the various
infinities and NaNs to keep the numbers flowing through, but are
NaNs on the fast path? Or are they also microcoded?
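For instance (my illustration), the non-trapping style lets a whole
sweep finish and be checked once at the end, instead of branching or
faulting on every element:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double d[4] = { 4.0, 0.0, -1.0, 2.0 };
        int bad = 0;
        for (int i = 0; i < 4; i++) {
            double r = 1.0 / d[i] + sqrt(d[i]);  /* may yield inf or NaN */
            if (!isfinite(r))
                bad++;                           /* but nothing traps    */
        }
        printf("%d problem elements\n", bad);    /* checked after the loop */
        return 0;
    }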
- Given that most of what we will actually do will be
multiply-accumulates, we might think about using a full-length
accumulator for floating addition; we run the accumulator as a
very long scaled integer. That will be more "accurate" and more
associative, but is not very "standard", although efforts have
been made.
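A full Kulisch-style long accumulator is too much to sketch here, but a
related software trick, Kahan compensated summation, gives the flavour
of spending a few extra flops to make addition better behaved (again,
my illustration rather than anything proposed in this thread):

    /* Kahan compensated summation: the rounding error of each addition is
       carried forward in c, so the result is far less sensitive to the
       order of the terms than a plain accumulation loop.                 */
    double kahan_sum(const double *x, int n)
    {
        double s = 0.0, c = 0.0;
        for (int i = 0; i < n; i++) {
            double y = x[i] - c;    /* corrected next term               */
            double t = s + y;       /* low-order bits of y may be lost   */
            c = (t - s) - y;        /* recover what was lost             */
            s = t;
        }
        return s;
    }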
4. Getting bit-for-bit matching answers from consecutive runs is
really difficult. Obviously, we need to seed our PRNGs
consistently, but there are all sorts of internal code and
optimisation issues that break things. This leads to real
difficulty in verifying benchmark tests. Overlaid on this are
sometimes genuine instabilities in important scientific codes. For
example, global ocean models can be very hard to "spin up"; you
need exactly the right initial conditions or the circulation never
converges to that of the planet we know. This may not even be a
problem in the models; perhaps the real equations of motion have
multiple fixed points? There are similar difficulties in molecular
dynamics around hydrogen bonding. Sadly, that is the case we
care about most; it covers protein folding in hydrated systems.
5. Some important computations depend on long-range interactions.
I have already mentioned multigrid methods, but there are
real physical systems that depend on long range "forces". Two that
spring to mind are Fermion calculations, where the wave
functions have to be anti-symmetric (you can't put two electrons
in the same state), and meteorology, where vertical radiative heat
transfer between layers is important.
6. Various sorts of multiphysics calculations depend on
coupled interactions between very different length or time scales.
Coupled ocean and climate models are a good example; the deep
oceans provide a much longer term store of heat, salinity
layering, and CO2 than the atmosphere. This all
complicates the housekeeping.
7. Most science needs to use lockstep "{calculate communicate}
repeat" computation, perhaps with overlapping and some red-black
cleverness. This requires the computational step to take the same
time on every processor. The FPU issues in "3" can cause a
problem; other difficulties can arise when cache aliasing is worse
in one particular processor. A famous historic example involved a
global virtual address space shared over a large parallel
computer. On exactly one processor, the code for the inner loop
aliased on top of the region of data it was using; that one
processor ran slowly and held up the whole system.
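In MPI terms the lockstep pattern looks roughly like this (a skeleton
of my own; the decomposition, names and coefficients are purely
illustrative):

    #include <mpi.h>

    /* One-dimensional domain decomposition.  Arrays u and v hold
       local_n + 2 entries, with halo cells at indices 0 and local_n + 1;
       left and right are neighbour ranks (or MPI_PROC_NULL at the ends).
       Each rank updates its interior points, then exchanges single-cell
       halos, so one slow processor delays the whole machine.            */
    void run(double *u, double *v, int local_n, int steps,
             int left, int right, MPI_Comm comm)
    {
        for (int s = 0; s < steps; s++) {
            /* calculate: interior update (the stencil shape from point 1) */
            for (int i = 1; i <= local_n; i++)
                v[i] = 0.5 * u[i] + 0.25 * (u[i - 1] + u[i + 1]);

            /* communicate: swap halo cells with both neighbours */
            MPI_Sendrecv(&v[1], 1, MPI_DOUBLE, left, 0,
                         &v[local_n + 1], 1, MPI_DOUBLE, right, 0,
                         comm, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&v[local_n], 1, MPI_DOUBLE, right, 1,
                         &v[0], 1, MPI_DOUBLE, left, 1,
                         comm, MPI_STATUS_IGNORE);

            double *tmp = u; u = v; v = tmp;  /* swap buffers for next step */
        }
    }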
8. So, it's quite hard to get an interesting model to work at
all. Things get even harder when we add constraints such as a high
degree of parallelism or limited local memory. It is entirely
likely that an established model will have just a few parameters
that can be "tweaked" for various architectures. These typically
allow for cache sizes, for the number of processors, for the
length of MPI (message passing interface) communications,
but not much else. If you design something radical that requires
substantial porting of applications, it won't get used. In that
context, I see that a lot of GPUs are finding their way into
established supercomputers, but I do not have any clear data about
their effectiveness.
Best wishes,
Denis Nicole