RE: concurrency research "hot" again?
Interesting discussion. Firstly, I think that the objectives of
parallelism, distribution, and interaction should be regarded as
distinct topics. Secondly, I think that developing a new OS, like
TinyOS or Mantis - and using knowledge about creating OSs! - is
certainly something that will offer new insights. Thirdly, I think
that the art of programming itself has to change:
1. Robustness in the form of parallel code blocks that have the same
   functionality but are implemented differently.
2. Robustness in the form of self-modifying code.
3. Robustness in the form of self-inspecting code.
Fourthly, I think that a solid theoretical background should combine
at least such disciplines as process algebra, scale space theory,
information geometry and cybernetics.
Kind regards, Anne
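[Editor's note: Anne's first form of robustness - parallel code blocks with the same functionality but different implementations - can be sketched in a CSP-derived language such as Go. This is an illustrative example only; the function names and the cross-checking scheme are the editor's, not Anne's.]

```go
package main

import "fmt"

// Two independently written implementations of the same specification:
// the sum of the integers 1..n.

func sumIterative(n int) int {
	total := 0
	for i := 1; i <= n; i++ {
		total += i
	}
	return total
}

func sumClosedForm(n int) int {
	return n * (n + 1) / 2
}

func main() {
	results := make(chan int, 2)

	// Run both variants as parallel processes; each reports its answer
	// on a channel, and the results are cross-checked for agreement.
	go func() { results <- sumIterative(100) }()
	go func() { results <- sumClosedForm(100) }()

	a, b := <-results, <-results
	if a == b {
		fmt.Println("agree:", a)
	} else {
		fmt.Println("disagree:", a, b)
	}
}
```

Because both variants must agree before the result is trusted, a bug in either implementation shows up as a disagreement rather than a silent wrong answer.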
On Thu, 2007-02-15 at 12:21 +0800, Andrew Delin wrote:
> My opinion is that concurrency must be managed at both the app and the
> OS level -- the latter because adding additional system resources means
> applications will benefit from more processing capacity without
> requiring reconfiguration. I believe that virtualisation has a role in
> this, meaning that in future we will increase and decrease "available
> CPU" dynamically. Also, the vast pile of "old apps" must be able to
> benefit transparently from executing on a parallel core, which requires
> an OS approach to concurrency.
> I do believe that app programmers should have a grounding in CSP and
> functional languages, both of which lead to clearer thinking on
> parallel-capable platforms. And we need mainstream languages that do
> better than leaving the programmer to struggle with ThreadCreate().
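[Editor's note: the contrast with raw ThreadCreate() can be illustrated with a CSP-derived sketch. Go's channels - shown here purely as an illustration chosen by the editor, not something proposed in the thread - make the producer/consumer rendezvous explicit instead of leaving synchronisation to hand-rolled locks.]

```go
package main

import "fmt"

// producer sends the squares 0, 1, 4, 9, 16 and then closes the
// channel, signalling end-of-stream - no shared state, no locks.
func producer(out chan<- int) {
	for i := 0; i < 5; i++ {
		out <- i * i
	}
	close(out)
}

// consume ranges over the channel until it is closed; synchronisation
// is implicit in the communication itself, much as with occam's ? and !
// channel operators.
func consume(in <-chan int) int {
	sum := 0
	for v := range in {
		sum += v
	}
	return sum
}

func main() {
	ch := make(chan int) // unbuffered: each send is a rendezvous
	go producer(ch)
	fmt.Println("sum of squares:", consume(ch)) // prints 30
}
```

The point of the sketch is that the programmer reasons about communication, not about thread handles and lock ordering.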
> I would like to see a comprehensive OS that works in a few kilobytes. I
> recall a Unix-like thing that boots in 3 MB; it is often used in
> elevators and other embedded settings. Can't remember the name, but I
> don't think it's enterprise class (e.g. security?)
> -----Original Message-----
> From: Andrzej Lewandowski [mailto:lewando@xxxxxxxxxxxxx]
> Sent: Thursday, 15 February 2007 11:55 AM
> To: tjoccam@xxxxxxxxxxx; Andrew Delin
> Cc: 'Allan McInnes'; 'occam list'
> Subject: RE: concurrency research "hot" again?
> "If OS size is in kilobytes.. " You are kidding, aren't you?... Or,
> maybe.... you are a PROFESSOR? This would explain everything...
> > -----Original Message-----
> > From: owner-occam-com@xxxxxxxxxx
> > [mailto:owner-occam-com@xxxxxxxxxx] On Behalf Of tjoccam@xxxxxxxxxxx
> > Sent: Wednesday, February 14, 2007 1:55 PM
> > To: Andrew Delin
> > Cc: Allan McInnes; occam list
> > Subject: RE: concurrency research "hot" again?
> > Maybe I'm sawing the same old violin, but...
> > I think the key to breaking out of the "incredibly difficult
> > to program in parallel" conundrum is to dump the baggage of
> > the last couple of decades and go back, not only to CSP, but
> > also to elegant (small) OS constructs. If OS size is in
> > kilobytes, there's hope you can understand COMPLETELY what it
> > is doing, especially if the OS restricts itself to resource
> > loading and leaves run-time concurrency to applications.
> > The other thing is to accept a 5 or 10 percent performance
> > hit in order to keep clear, provable, traceable resource
> > usage (i.e. eliminate spaghetti). The "hit" is actually not a
> > hit, because the cost of code tangles is really much more;
> > but if raw specs are applied, you can always do it just a
> > little faster by letting pointers and dynamic constructs go wild.
> > Larry Dickson
> > > Last week I attended a presentation by BillG where he also raised
> > > the topic of insufficient semantic richness in today's programming
> > > models - saying new developments are needed in programming
> > > languages to use the parallelism of multi-core CPU designs. In the
> > > same conference, the head of MS Research also talked about these
> > > challenges.
> > >
> > > About 6 months ago I sat through a presentation about options for
> > > parallelism in .NET today, and it wasn't pretty - way too much
> > > locking litter and thread invocation for my liking. Having to
> > > understand the behaviour of the compiler so that your process
> > > control statements don't get optimised away isn't goodness.
> > >
> > > There is a proposed MS approach which seems to be a form of 'CPU
> > > transaction', where entire blocks of statements effectively compete
> > > for resources and the OS or hardware detects a livelock, deadlock,
> > > or other problematic condition. At this point, blocks of process
> > > state are reversed by hardware. I need to find out more about this.
> > > These techniques will probably need to exist if you want to build a
> > > robust OS on top of multicore, where applications with different
> > > "parallel heritage" must run together. Nonetheless, the best
> > > approach for app construction is to start along CSP lines, not to
> > > rely on the system to reverse out of trouble...
> > >
> > >
> > > -----Original Message-----
> > > From: owner-occam-com@xxxxxxxxxx
> > [mailto:owner-occam-com@xxxxxxxxxx]
> > > On Behalf Of Allan McInnes
> > > Sent: Wednesday, 14 February 2007 1:25 PM
> > > To: occam list
> > > Subject: concurrency research "hot" again?
> > >
> > > It seems that concurrency is again getting "mainstream" attention.
> > > I've seen several articles in the popular press over the last few
> > > days touting Intel's new 80-core "teraflop-on-a-chip" demonstration
> > > chip. Most of the articles I've seen have made a big deal out of
> > > how difficult programmers will find it to program for 80 cores, and
> > > how lots of research needs to be done to develop new techniques for
> > > programming parallel architectures (here's one sample of the
> > > articles I've seen:
> > > http://www.crn.com/sections/breakingnews/dailyarchives.jhtml?articleId=197005746).
> > >
> > > At the same time, I've seen several links to "The Landscape of
> > > Parallel Computing Research: A View from Berkeley"
> > > (http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html)
> > > show up on various websites that I check regularly. In that report,
> > > the folks from Berkeley say, among other things:
> > >
> > > "Since real world applications are naturally parallel and hardware
> > > is naturally parallel, what we need is a programming model, system
> > > software, and a supporting architecture that are naturally
> > > parallel. Researchers have the rare opportunity to re-invent these
> > > cornerstones of computing, provided they simplify the efficient
> > > programming of highly parallel systems."
> > >
> > > So is research into concurrent programming becoming a hot topic
> > > again? And how many of these research efforts are simply going to
> > > reinvent the occam wheel? The Berkeley effort, in particular,
> > > sounds a lot like the occam/transputer approach (at least at a
> > > high level). However, the tech report in question makes no mention
> > > of CSP, occam, or transputers (OTOH, they also omit any mention of
> > > Berkeley's Prof. Ed Lee, who has done a lot of work on concurrent
> > > programming models via the Ptolemy project).
> > >
> > > It'll be interesting to see where this goes. Hopefully it'll lead
> > > to an upswing in funding for projects that can claim to be working
> > > towards support for massive concurrency - like KRoC/nocc :-)
> > >
> > > Allan
> > > --
> > > Allan McInnes <amcinnes@xxxxxxxxxx>
> > > PhD Candidate
> > > Dept. of Electrical and Computer Engineering
> > > Utah State University
> > >
> > >
> > >