
Addressing in hardware and software



I was discussing my RSVP CPU design with a cow-orker (for details on RSVP
see http://www.slip.net/~scholr/tech/RSVP.html) when a complementary idea
struck me: 'flying segments'. I'd like to get comments on the idea.

(To those on the Xanadu list: This may seem to have little relevance to
Xanadu, but please bear with me. It will be clear what I'm getting at
towards the end - I hope - and I have some questions that are directly
related to the project, or at least its history. Also, if some of you get
two copies of this, I apologize; I wanted to make sure this got to certain
people no matter what, so you should feel honored that I chose you... )

Now, given the design of the RSVP processors, it is natural to use a
segmented memory system for memory management: rather than trying to
access the whole of a memory space, a cache can be set to work from a
given segment, thus preventing or at least minimizing the possibility of
cache overlap.
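The cache-per-segment idea can be sketched roughly as follows (Python,
purely illustrative; the class and method names are mine, not part of the
RSVP design):

```python
# Illustrative sketch: a cache bound to one segment. Because every
# cached line is keyed by its segment, two caches serving different
# segments can never hold aliases of the same memory line.
class SegmentCache:
    def __init__(self, segment_id):
        self.segment_id = segment_id
        self.lines = {}          # offset -> cached value

    def access(self, segment_id, offset, memory):
        # The cache only services its own segment.
        if segment_id != self.segment_id:
            raise ValueError("address outside this cache's segment")
        if offset not in self.lines:           # miss: fill from memory
            self.lines[offset] = memory.get((segment_id, offset), 0)
        return self.lines[offset]

memory = {(7, 0x10): 42}
cache = SegmentCache(segment_id=7)
print(cache.access(7, 0x10, memory))   # fill on miss, then hit -> 42
```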

But it can also use this kind of segmentation in a bigger way: to unite
several disjoint memory spaces into a single logical segmented memory. If
you use a large (32-bit) segment size and an even larger (64-bit) segment
offset, you would have an effective 96-bit address space, vastly larger
than any memory a physical system could ever hold; this can be used in
several ways going beyond the traditional concept of memory management.
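Concretely, the composition of such an address might look like this (the
field layout is just the sizes suggested above; nothing beyond that is
specified):

```python
# Illustrative: composing a 96-bit effective address from a 64-bit
# segment number and a 32-bit offset within the segment.
SEG_BITS, OFF_BITS = 64, 32

def effective_address(segment, offset):
    assert 0 <= segment < (1 << SEG_BITS)
    assert 0 <= offset < (1 << OFF_BITS)
    return (segment << OFF_BITS) | offset

def split_address(addr):
    return addr >> OFF_BITS, addr & ((1 << OFF_BITS) - 1)

addr = effective_address(segment=0xDEAD, offset=0x100)
assert split_address(addr) == (0xDEAD, 0x100)
print(f"effective address space: 2**{SEG_BITS + OFF_BITS} bytes")
```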
 
First, it means that the memory system, already decoupled from the CPUs by
the cache indirection, can be spread out over two or more memory banks,
or even mapped to secondary storage such as disks; this, with appropriate
disk controller firmware, makes virtual memory and persistence a simple
matter of remapping cache segments.
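A minimal sketch of "virtual memory as segment remapping" (all names here
are invented for illustration, not taken from the RSVP design):

```python
# Each segment number maps to a backing store (RAM bank, disk, ...).
# Swapping a segment out is just changing its table entry; the
# CPU-side (segment, offset) address never changes.
class SegmentTable:
    def __init__(self):
        self.backing = {}     # segment id -> (store name, base address)

    def map(self, segment, store, base):
        self.backing[segment] = (store, base)

    def resolve(self, segment, offset):
        store, base = self.backing[segment]   # would fault if unmapped
        return store, base + offset

table = SegmentTable()
table.map(3, store="ram0", base=0x1000)
print(table.resolve(3, 0x20))              # ('ram0', 4128)
table.map(3, store="disk:/swap", base=0)   # "page out": remap only
print(table.resolve(3, 0x20))              # ('disk:/swap', 32)
```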

It can make memory-mapped I/O of all kinds simpler, in fact. By treating
each device as a memory segment, you get a very simple model for device
access and protection.
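The device-as-segment model can be sketched like so (again, an invented
illustration: a store to a device is an ordinary store into its segment,
and protection is just a check on which segments a task may touch):

```python
# Each device occupies a segment; its registers are offsets within it.
class Device:
    def __init__(self, name):
        self.name, self.regs = name, {}
    def store(self, offset, value):
        self.regs[offset] = value
    def load(self, offset):
        return self.regs.get(offset, 0)

devices = {0x10: Device("uart0"), 0x11: Device("disk0")}  # segment -> device
allowed = {0x10}                                          # this task's rights

def store(segment, offset, value):
    if segment not in allowed:
        raise PermissionError("no rights to segment %#x" % segment)
    devices[segment].store(offset, value)

store(0x10, 0, ord("A"))        # write a byte to the UART's segment
print(devices[0x10].load(0))    # 65
```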

But the real power of this idea comes in when you extend it beyond a single
workstation in the conventional sense. By assigning every processor and
every device a fixed segment or set of segment offsets, you now can treat
all of the devices with such offsets as a single address space, regardless
of distance or the nature of the connection between the devices. In effect,
you are creating a single memory space for the entire network of computers,
in which every device
and processor can access (within its protection rights) any other device or
processor it can communicate with. This transparency eliminates the need
for software communication and data transfer protocols, as well as the
dichotomy between client and server. In effect, the entire 'network'
transparently becomes an SMP cluster.
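One way to picture this: the segment number itself can name the node that
owns the memory, so "remote access" is just ordinary address resolution.
The partitioning scheme below is invented purely for illustration:

```python
# Suppose the top 16 bits of the 64-bit segment number name the
# owning node; the rest is the segment within that node.
NODE_BITS = 16

def owner_of(segment):
    return segment >> (64 - NODE_BITS)

def access(segment, offset, local_node):
    node = owner_of(segment)
    if node == local_node:
        return ("local", segment, offset)
    # Otherwise the request is forwarded over the fabric to its owner.
    return ("remote", node, segment, offset)

seg = (0x0002 << 48) | 0x5     # a segment owned by node 2
print(access(seg, 0x40, local_node=1)[0])   # remote
print(access(seg, 0x40, local_node=2)[0])   # local
```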

This of course has all the inherent disadvantages of hardware vs. software,
but with suitable gateways it could operate, at least to some degree, over
existing systems. A mixed hardware and software approach means that you
could in effect have 'hardware memory management' carried over software
TCP/IP networking, invisible to the individual nodes: the gateway would
wrap the hardware-layer memory access in an IP packet, send it over the
conventional network, and then decode it into a hardware-layer message on
the other side... weird, but I think it could be done somehow, if you
could solve the inherent addressing problems you get when you invert
protocol layers.
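The gateway's wrap/unwrap step might look something like this; the packet
format is made up entirely for illustration (a real gateway would also
need routing, acknowledgment, and the addressing fixes mentioned above):

```python
import struct

# A "hardware-layer" memory request is serialized into an opaque
# payload, carried over the ordinary software network, and re-issued
# as a memory access by the gateway on the far side.
def wrap(segment, offset, is_write, value=0):
    return struct.pack(">QIBQ", segment, offset, is_write, value)

def unwrap(payload):
    segment, offset, is_write, value = struct.unpack(">QIBQ", payload)
    return segment, offset, bool(is_write), value

packet = wrap(segment=0xBEEF, offset=0x20, is_write=True, value=99)
print(unwrap(packet))    # (48879, 32, True, 99) on the far gateway
```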

Is this at all reasonable, or am I just woolgathering?

(For those of you who aren't familiar with it, the rest is related to the
Xanadu Project. You can safely ignore it if you aren't interested).

As I was writing this, however, I had an even weirder idea: could tumbler
addressing be implemented, at least partially, in hardware (assuming
software management of type tagging as needed)? If so, could it be used to
replace conventional linear addressing entirely, as part of a processor or
class of devices? The advantages of tagged memory and content-addressable
memory have been demonstrated (even if the cost was too high for them in
the end); tumblers would combine this with an infinite 'address space',
networking in the same manner as flying segments, and fine-grained control
over the object/chunk/device spaces being accessed (micro-segmentation, as
it were).
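For concreteness, here is a toy model of tumbler ordering and containment
as I understand it from the Xanadu literature (digit sequences compared
lexicographically, with a prefix enclosing everything beneath it); this is
my reading, not the canonical definition:

```python
# Toy tumbler model: a tumbler is a sequence of non-negative digits
# (e.g. 1.1.0.2 -> (1, 1, 0, 2)). Ordering is lexicographic, and a
# tumbler "contains" every tumbler that extends it, which is what
# gives fine-grained document/version/span addressing.
def contains(t, u):
    """True if tumbler t is a prefix of (i.e. encloses) tumbler u."""
    return len(t) <= len(u) and u[:len(t)] == t

doc   = (1, 1, 0, 2)          # an address for a whole document
span  = (1, 1, 0, 2, 0, 7)    # a span inside that document
other = (1, 1, 0, 3)

assert contains(doc, span)    # the document encloses its span
assert not contains(doc, other)
assert doc < other            # plain tuple comparison orders them
```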

On a related note, I could use a clearer explanation of both U.Green and
U.Gold, especially the Ent structure in the latter. Smalltalk is a language
I am only passingly familiar with (much to my regret, as it clearly would
be an interesting system to work in), and the composite Smalltalk/C++/X++
code is extremely difficult to follow (I suspect that even if it were
completed, it would have quickly become unmaintainable, but that's neither
here nor there). It seems to me that critical parts of the U.Gold code are
either missing or too obscure to recognize. I suppose this may be due to
the ParcPlace license issues; but since the Udanax page claims that all of
the Ent and enfilade structures are in the available code, I can only
assume that I am looking in the wrong places. 

So my question is: are any *complete* descriptions of the enfilade and Ent
structures available? If so, where can I find them? Or is the fact that I
am unable to divine them from the material on the http://udanax.com/ and
http://xanadu.net/ web pages just denseness on my part? Even if it is of no
relevance to the Zigzag design (which I doubt), I would like to understand
these structures to get a better picture of the Project overall, to see
what has been done already and what may be done in the future.


Schol-R-LEA;2 ELF JAM LCF BiWM MGT GS (http://www.slip.net/~scholr/)
First Speaker, Last Eristic Church of Finagle and Holy Bisexuality
"If The Computer is a Universal Control System, then let's give kids
Universes to control". - Ted Nelson