Distributed shared channels (via SEMAPHOREs)



Dear All,

Here's how to implement a SHARED channel over a distributed-memory system
(such as a network of workstations or transputers) using SEMAPHOREs.

--{{{  First define a SHARED channel

Here is an example SHARED channel in occam3:

  CHAN TYPE COMMS
    CHAN OF REQUEST request:
    CHAN OF REPLY reply:
  :

  SHARED COMMS c:

The SHARED channel has two components: the REQUEST field carries information
from a client to the server and the REPLY field carries answers.  [In general,
occam3 permits a SHARED channel any number of components; in practice, given
occam2 variant PROTOCOLs, only two are ever necessary.]

In occam2.1, we don't have the CHAN TYPE and have to declare separate channels:

  CHAN OF REQUEST c.request:
  CHAN OF REPLY c.reply:

Now, SHARED channels connect any number of client processes to a single server
process.  Clients have to queue for exclusive access to the server.

--}}}

--{{{  Implications for distributing a SHARED channel

In a distributed system, clients may be placed on many processors and the trick
is to maintain a distributed queue.  If the processors do not share a unified
memory space, we need to do some work ... though not too much.  In the (late
lamented) T9000 transputer, the `distributed' queue is held entirely on the
processor containing the server of the SHARED channel (as a queue of RESOURCE
channels, one for each client process).  We are going to do it with SEMAPHOREs
and the queue really is distributed.

--}}}

--{{{  On a processor with clients of the SHARED channel

Consider a processor containing some of the clients of the SHARED channel.
For simplicity, suppose it only contains such clients:

  PROC client.processor (VAL INT my.processor, n.clients, SHARED COMMS c)
    --{{{  
    PAR n = 0 FOR max.clients.per.processor
      IF
        n < n.clients
          client (n + 1, my.processor, c)
        TRUE
          SKIP
    --}}}
  :

The above is occam3 code for placing on a remote processor.  The code allows
the easy creation of remote processors carrying different numbers of clients
(which happen, in this example, to be identical).  We have to set a compile-time
constant `max.clients.per.processor' for the usual reason: the replication
count of a PAR must be known statically.  The `n + 1' and `my.processor'
values are passed on to the clients for information and have nothing to do
with the operation of the SHARED channel.

For the SEMAPHORE implementation of the distributed SHARED channel, we need
one (virtual) link from each processor-with-clients to the processor holding
the server.  The clients on a remote processor queue up on a local SEMAPHORE
to gain access to this virtual link.  Here is the occam2.1 version of the code
for the remote client processor:

  PROC client.processor (VAL INT my.processor, n.clients,
                         CHAN OF REQUEST request,      -- the SHARED
                         CHAN OF REPLY reply)          -- channel-pair
    --{{{  
    #PRAGMA SHARED request
    #PRAGMA SHARED reply
    
    SEMAPHORE s:                -- local SEMAPHORE on which local clients queue
    #PRAGMA SHARED s            -- to use the shared request-reply channels
    
    SEQ
      initialise.semaphore (s, 1)
      PAR n = 0 FOR max.clients.per.processor
        IF
          n < n.clients
            client (n + 1, my.processor, s, request, reply)
          TRUE
            SKIP
    --}}}
  :

To follow the behaviour of the client, we need to know the PROTOCOLs.  Suppose:

  PROTOCOL REQUEST
    CASE
      string; BYTE::[]BYTE; INT         -- any number of these
      integer; INT; INT                 -- in any order.
      end.transaction                   -- finish with this
  :

  PROTOCOL REPLY IS BOOL:   -- returned by server after each string/integer

Each client makes periodic CLAIMs on the SHARED channel.  An occam3 example is:

  PROC client (VAL INT n, my.processor, SHARED COMMS c)
    --{{{  
    INT logical.time:
    SEQ
      logical.time := 0
      WHILE TRUE
        SEQ
          ...  wait n seconds
          CLAIM c
            --{{{  transact business with server
            BOOL any:
            SEQ
              c[request] ! integer; logical.time; 0
              c[reply] ? any
              ...  further possible business
            --}}}
    --}}}
  :

In occam2.1, we make one change to the implementation of the CLAIM, compared
with that described for the non-distributed SHARED channel.  The CLAIM body now
opens with an extra output, signalling (to the other side) that we are ready
for the start of play:

  PROC client (VAL INT n, my.processor,
               SEMAPHORE s,                  -- this is
               CHAN OF REQUEST request,      -- the SHARED
               CHAN OF REPLY reply)          -- channel-pair
    --{{{  
    INT logical.time:
    SEQ
      logical.time := 0
      WHILE TRUE
        SEQ
          ...  wait n seconds
          --{{{  claim the shared channel
          claim.semaphore (s)
          --}}}
          --{{{  start-of-play signal
          CHAN OF INT request RETYPES request:
          request ! 0
          --}}}
          --{{{  transact business with server
          BOOL any:
          SEQ
            request ! integer; logical.time; 0
            reply ? any
            ...  further possible business
          --}}}
          --{{{  release the shared channel
          release.semaphore (s)
          --}}}
    --}}}
  :

Note that this `start-of-play' signal just needs an output component of the
SHARED channel and is independent of its protocols.  This would be no bad
thing to include in any case for a non-distributed CLAIM -- since it makes
the generated code independent of distribution and guarantees a signal that
always enables the server to ALT (even if the transaction proper opens with
a communication from the server to the client).
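
One illustration: a server for a non-distributed SHARED channel could then open
each service cycle by accepting that signal before transacting any business.
The following is only a sketch of that suggestion (`local.server' is not part
of the enclosed code); it assumes the same `service' procedure given below:

  PROC local.server (CHAN OF REQUEST request, CHAN OF REPLY reply,
                     CHAN OF BYTE out)
    --{{{  
    -- sketch only: illustrates the start-of-play suggestion above
    WHILE TRUE
      SEQ
        --{{{  accept some client's start-of-play signal
        INT signal:
        CHAN OF INT start RETYPES request:
        start ? signal            -- this input could equally sit in an ALT
        --}}}
        --{{{  transact business with that client
        service (request, reply, out)
        --}}}
    --}}}
  :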

--}}}

--{{{  On the processor with the server of the SHARED channel

On the processor containing the server, there will be an array of external
(virtual) channel-pairs -- one from each processor that holds clients.  These
channel-pairs are handled directly (one at a time, of course) by the server,
but only after the server has been granted access by one of an array of small
`client.proxy' processes.  These proxies also handle the (incoming) virtual
channels, but not (of course) at the same time as the server.  The sharing of
an incoming virtual channel between its managing proxy and the server is
managed by normal channel synchronisation and not by a SEMAPHORE.

Here is occam2.1 code for a processor containing only the server for a single
distributed SHARED channel:

  PROC server.processor ([]CHAN OF REQUEST request,    -- the SHARED
                         []CHAN OF REPLY reply,        -- channel-pair
                         CHAN OF BYTE out)
    --{{{  
    #PRAGMA SHARED request           -- shared between the proxies and the server
                                     -- (note: this sharing is not controlled by
                                     -- a SEMAPHORE, but by taking turns that
                                     -- are synchronised through normal channel
                                     -- communications)
    
    SEMAPHORE grant.s:               -- controls access to the grant channel
    #PRAGMA SHARED grant.s           -- between the local proxies
    
    CHAN OF INT grant:               -- this is the SHARED channel between the
    #PRAGMA SHARED grant             -- local proxies and the server
    
    SEQ
      initialise.semaphore (grant.s, 1)
      PAR
        --{{{  local proxies
        PAR n = 0 FOR n.client.processors
          CHAN OF INT request RETYPES request[n]:
          client.proxy (n, request, grant, grant.s)
        --}}}
        --{{{  server
        server (grant, request, reply, out)
        --}}}
    --}}}
  :

A picture helps:

  --{{{  (occam2.1) data-flow diagram for a processor containing a server
  
             _______________________________________________________
            |                                                       |
            |                                          ||           |
            |                    ----------->----------||           |
            |                    |  -------->----------||           |
            |                    |  |                  || grant     |
            |                    |  |     -->----------||           |
            |                    |  |     |       _____||_____      |
            |        _______     |  |     |      |            |     |
            |   -->-| proxy |-->--  |     |      |            |     |
            |   |   |_______|       |     |      |            |     |
    rr[0]   |   |                   |     |      |            |     |
   -<-->----------------------------|-----|------|            |     |
            |                       |     |      |            |     |
            |        _______        |     |      |            |     |
            |   -->-| proxy |-->-----     |      |            |     |
            |   |   |_______|             |      |            |     |
    rr[1]   |   |                         |      |            |     |   out
   -<-->----------------------------------|------|   server   |---------->--
            |                             |      |            |     |
            |           .                 |      |            |     |
            |           .                 |      |            |     |
            |           .                 |      |            |     |
            |        _______              |      |            |     |
            |   -->-| proxy |-->-----------      |            |     |
            |   |   |_______|                    |            |     |
    rr[n-1] |   |                                |            |     |
   -<-->-----------------------------------------|            |     |
            |                                    |            |     |
            |                                    |____________|     |
            |  server.processor                                     |
            |_______________________________________________________|
  
  --}}}

Note that the proxies are local clients to the server via a (non-distributed)
SHARED channel called `grant'.  Hence, they need a local `grant.s' SEMAPHORE.

Note also that each proxy is attached only to the incoming external channel
and that it is independent of PROTOCOL.  This means that these proxies are
generic for *any* kind of SHARED channel.  They are also very simple:

  PROC client.proxy (VAL INT n, CHAN OF INT request, grant, SEMAPHORE s)
    --{{{  
    WHILE TRUE
      INT signal:
      SEQ
        request ? signal          -- remote claim
        claim.semaphore (s)       -- queue up for the server
        grant ! n                 -- tell server where client is located
        grant ! n                 -- wait until server has dealt with client
        release.semaphore (s)     -- let others get the server
    --}}}
  :

Initially, `client.proxy' listens to the external `request' and the `server'
does not.  The `server' is either listening on the `grant' channel or dealing
with a client on a line from another processor.

When a signal arrives on the `request' line, this indicates that there is a
client on the processor at the other end ready to transact business with our
`server'.  The `client.proxy' queues up (with other proxies) on the SEMAPHORE
guarding the SHARED `grant' channel.  When it acquires this SEMAPHORE, it sends
its index (supplied as a parameter) down the `grant' channel to the `server'.

The `server' uses this index to commence its transaction on the indicated pair
of `request'/`reply' external channels.  At this point, the `client.proxy' is
not listening on the `request' line (listening there in parallel with the
`server' would be an error), but is trying to repeat its `grant' communication.
This causes it to wait until the `server' has completed its transaction with
the remote client; only then does the `server' accept this second message.
The `server' is then no longer using `request', so it is safe for the
`client.proxy' to release the `grant' SEMAPHORE (letting other proxies get at
the `grant' channel) and resume listening on `request'.

Here is the `server' code:

  PROC server (CHAN OF INT grant,              -- this is
               []CHAN OF REQUEST request,      -- the SHARED
               []CHAN OF REPLY reply,          -- channel-pair
               CHAN OF BYTE out)
    --{{{  
    WHILE TRUE
      INT processor:
      SEQ
        grant ? processor   -- the server may freely ALT on this shared channel
        service (request[processor], reply[processor], out)
        grant ? processor   -- return control of `request[processor]' to the proxy
    --}}}
  :

where we have abstracted the actual transaction code into a procedure.

The code for setting up the `server' and the `client.proxy' processes is not
user-written, but would be generated by the compiler/configurer.  The `server'
may, therefore, trust the `processor' value supplied down its `grant' channel.

Note that the `server' listens to an ordinary software channel (`grant'), which
tells it on which virtual link lies the processor containing the remote client.
Therefore, ALTing over SHARED channels is immediately possible.
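
For example, a single server shared between two such distributed SHARED
channels just ALTs between their `grant' channels.  The following is only a
sketch (the doubled parameter lists and the `.a'/`.b' names are invented for
this illustration, and both channels are assumed to carry the same
REQUEST/REPLY protocols):

  PROC alting.server (CHAN OF INT grant.a, grant.b,
                      []CHAN OF REQUEST request.a, request.b,
                      []CHAN OF REPLY reply.a, reply.b,
                      CHAN OF BYTE out)
    --{{{  
    -- sketch only: names and parameters invented for illustration
    WHILE TRUE
      INT processor:
      ALT
        grant.a ? processor       -- a remote client behind SHARED channel `a'
          SEQ
            service (request.a[processor], reply.a[processor], out)
            grant.a ? processor   -- return control to the proxy
        grant.b ? processor       -- a remote client behind SHARED channel `b'
          SEQ
            service (request.b[processor], reply.b[processor], out)
            grant.b ? processor   -- return control to the proxy
    --}}}
  :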

Note also that, during the actual transaction, the remote client and server are
in direct communication over a single virtual link -- there are no intermediate
processes slowing things down.  The `client.proxy' only gets involved when
starting up and shutting down a transaction.  The overheads of this are very
small and, probably, negligible in comparison to those involved in the remote
communication.

For completeness, here is a `service' procedure for a single client-server
transaction:

  PROC service (CHAN OF REQUEST request, CHAN OF REPLY reply, CHAN OF BYTE out)
    --{{{  
    --{{{  COMMENT note
    --This performs a single client-server transaction.  It is application specific.
    --It is a 2-way transaction just to demonstrate that we can do it.
    --}}}
    BOOL transacting:
    SEQ
      transacting := TRUE
      WHILE transacting
        request ? CASE
          BYTE size:
          [255]BYTE s:
          INT field:
          string; size::s; field
            SEQ
              out.string ([s FOR INT size], field, out)
              reply ! TRUE
          INT i:
          INT field:
          integer; i; field
            SEQ
              out.number (i, field, out)
              reply ! TRUE
          end.transaction
            SEQ
              out.string ("*n", 0, out)
              transacting := FALSE
    --}}}
  :

Finally, here are the occam3 equivalents of the above routines.  The process
that sits on the server processor itself is a bit boring if it only contains
the server:

  PROC server.processor (SHARED COMMS c, CHAN OF BYTE out)
    --{{{  
    server (c, out)
    --}}}
  :

The server itself is not that much more interesting:

  PROC server (SHARED COMMS c, CHAN OF BYTE out)
    --{{{  
    WHILE TRUE
      GRANT c
        service (c, out)
    --}}}
  :

The service procedure is much the same as the occam2.1 version and only given
here in outline:

  PROC service (COMMS c, CHAN OF BYTE out)
    --{{{  
    --{{{  COMMENT note
    --This performs a single client-server transaction.  It is application specific.
    --It is a 2-way transaction just to demonstrate that we can do it.
    --}}}
    BOOL transacting:
    SEQ
      transacting := TRUE
      WHILE transacting
        c[request] ? CASE
          ...  as before, but with `c[reply]' replacing `reply'
    --}}}
  :

Notice that its COMMS parameter is no longer SHARED.

--}}}

--{{{  An emulated distributed system

A distributed system demonstrating this SHARED channel may be user-configured
through the following table:

  VAL INT max.clients.per.processor IS 10:
  VAL []INT n.clients IS [4, 2, 8, 5, 3, 4]:  -- number of clients per processor
  VAL INT n.client.processors IS SIZE n.clients:

and defined with:

  [n.client.processors]CHAN OF REQUEST link.request:
  [n.client.processors]CHAN OF REPLY link.reply:

  PAR
    PAR n = 0 FOR n.client.processors
      client.processor (n, n.clients[n], link.request[n], link.reply[n])
    server.processor (link.request, link.reply, screen)

which occam3 would reduce to:

  SHARED COMMS link:

  PAR
    PAR n = 0 FOR n.client.processors
      client.processor (n, n.clients[n], link)
    server.processor (link, screen)

The occam2.1 version emulates this distributed system on a single processor
and runs immediately under KRoC on its released targets.  It produces quite
a pleasing display.

It will also run on real distributed systems such as T9000 networks (with
the general KRoC "semaphore.inc"), the multi-processor Alpha platforms made
by Alpha-Data Parallel Systems Ltd (http://www.alphadata.co.uk/) and (when
we release it :-) networks of workstations.

--}}}

--{{{  SUMMARY

    Client processes on a remote processor queue on a local SEMAPHORE
    to access the virtual link connecting their processor to the server
    processor.  When a client has claimed its local SEMAPHORE, it first
    signals down the virtual link to a proxy process that will be waiting
    at the server end.  Then, it tries to transact its business over the
    virtual channel, but will be held up until ...

    The proxy process, after receiving the signal from the remote client,
    queues up on a local SEMAPHORE in the server processor to tell the
    actual server process (using a locally shared `grant' channel) about
    the remote client.  Still holding the SEMAPHORE, the proxy tries to
    tell it again, which causes it to sleep until ...

    The client and server complete their transaction directly with each
    other.  When the server finishes, it clears the signal from the
    waiting proxy (which releases the `grant' SEMAPHORE and goes back
    to listening on its virtual link) and listens on the `grant' channel
    for further work.  When the client finishes, it releases its local
    SEMAPHORE, allowing sibling clients to access the virtual link to
    the server processor.

--}}}

--{{{  The `weak FIFO queue'

So, client processes queue up on a distributed `queue' for the distributed
shared channel.  The `queue' is made up from real FIFO queues (one in each
processor containing clients) that are processed through another real FIFO
(of client proxies in the processor containing the server).

Locally, clients are serviced in a FIFO manner (with respect to other clients
on the same processor).  Globally, clients are guaranteed service ... but not
necessarily a `fair' one and not FIFO.

Consider the service guaranteed to a client on a remote processor.  If there
are n clients altogether on that processor and p processors overall, service
will be obtained, at worst, after (n*p) server-cycles from the client making
its CLAIM on the SHARED channel.  Note that, if there are more clients than
average on this remote processor, (n*p) will be larger than the total number
of clients in the system.
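
For example, with the demonstration configuration above (n.clients is
[4, 2, 8, 5, 3, 4], giving p = 6 and 26 clients in all), a client on the
8-client processor is guaranteed service within (8*6) = 48 server-cycles of
its CLAIM, whereas a client on the 2-client processor is guaranteed it within
(2*6) = 12.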

Thus, all clients get service guarantees, but clients on lightly loaded
processors get a better guarantee than clients on heavily loaded ones.  It's
quite possible for a client on a heavily loaded processor to make its CLAIM
before a client on a lightly loaded one ... and yet be serviced after it.
I guess that reflects real life!

--}}}

--{{{  Postscript

A T9000 transputer network could implement a SHARED channel using a combined
SEMAPHORE and RESOURCE mechanism.

Remote clients could share one virtual link to the server processor using the
SEMAPHORE mechanism described above.

The virtual link at the server end would be mapped on to a RESOURCE channel.
That way, we get back to only having one virtual link per remote client
processor -- rather than one virtual link per remote client as required by
a pure RESOURCE implementation.

Since the T9000 has both SEMAPHORE and RESOURCE instructions, this looks like
the best bet.  The RESOURCE mechanism at the server end would have slightly
lower overheads than the proxy processes and their SEMAPHORE ... but it would
be close.

For KRoC, we are a little reluctant to make the effort of modifying the kernel
to check whether channels are marked as RESOURCE ones (which would add a tiny
overhead to normal channel communications) ... given that SEMAPHOREs
can do the job and don't need any changes to the kernel.  Further, to allow
a server process to ALT over SHARED channels, we would have to implement two
extra instructions (`enable.grant' and `disable.grant') and get them to work
on both real and virtual RESOURCE channels.  With SEMAPHOREs, we get ALTing
over SHARED channels (real and virtual) for free.

[Has anyone ever tried the T9000 RESOURCE instructions to see if they work?]

--}}}

Enclosed is a uuencoded, gzipped, tarred set of (9) files for KRoC programs
that demonstrate this distributed SHARED channel.  There is a READ.ME file
giving instructions.  It includes an extra variation that configures some
client processes on the server processor as well as on remote processors.
The `server' code needs minimal adjustment to deal with the local clients
(which come in over a local request/reply channel-pair alongside the array
of virtual ones).  You will need to supply "semaphore.inc" and the standard
"utils" library from the KRoC release.

To decode the enclosed, save this email to a file "foo", say.  Then:

  % uudecode foo
  % gunzip dsc.gz
  % tar xvf dsc

Cheers,

Peter.
