Date: Fri, 23 Jun 2000 09:05:07 -0700
From: Jack Park <jackpark@thinkalong.com>
To: unrev-II@egroups.com
Subject: Augment + categories = OHS v0.1
Painfully longish, sorry.
[Commenting on a recent letter from Gil Regev, written after the meeting he attended at SRI on June 15, 2000, in which he states in part...]
"In Knoware, relationships have no meaning for the software, but they do have meaning to the users. The search tool searches for text in relationships as well as in concepts."
Gil has a good point here. In fact, George Lakoff wrote a whole book on this (_Women, Fire, and Dangerous Things_). Gil's point, combined with the meeting yesterday, led me to ponder a couple of issues, which I shall do out loud, even as I type...
My take on the meeting yesterday was this: a lot of back-and-forth without a clearly defined ontology on which the banter was founded. Ultimately, even the definition of "document" was up for grabs, not to mention "node."
I believe that we waste an enormous amount of human intellectual energy doing battle while not even on the same page. If that sounds like a criticism of one aspect of yesterday's meeting, it is meant to be. OTOH, the meeting was indeed valuable, largely because Eugene did a masterful job of summarizing the Use Case issue and presenting it -- something that needed to be done (and continues to need doing).
I respectfully submit that all discussions be preceded by the development of a consensus ontology. (Side note: achievement of a consensus ontology should be a goal of this list.) Gil points out what Lakoff and others have been saying: once you get to the ontological level of "category," consensus begins to fall apart. Gil uses the term "rigidify." That works for me, but there are other points of view as well. At issue is the fact that we all categorize the world in our own way. Production-line education tends to enforce standardization in that arena, but we are still individuals with our own non-linearities and so forth.
So, just what IS a mother to do?
An OHS/DKR is, at root, a vision of a universal tool for collaborative evolutionary epistemology (that's my take on it, your mileage may vary). To be universal, the implication is that everybody has the chance to contribute (both give and take) with the "appearance" of being on the same page as everyone else.
As it turns out, Adam, Howard, Peter Yim, and I all work for a company that is working to deliver this very capability in the B2B space. VerticalNet uses a carefully crafted ontology (on which Howard works) to serve as an "interlingua" or, shall I say, "page renderer," so that enterprises that each have their own ontology can be mapped onto the same playing field. A rough sketch of the idea follows below.
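To show what I mean by an interlingua, here is a minimal Python sketch. Everything in it (the two enterprises, the terms, the mapping table) is invented purely for illustration -- this is emphatically not VerticalNet's actual ontology. The point is only that each enterprise keeps its own vocabulary, and a shared mapping translates local terms into common concepts so that two parties can discover they are talking about the same thing.

    # Hypothetical local vocabularies for two enterprises (names invented).
    acme_terms = ["hex-head fastener"]
    globex_terms = ["hex bolt"]

    # Shared interlingua: local term -> common concept.
    interlingua = {
        "hex-head fastener": "concept:HexBolt",
        "hex bolt": "concept:HexBolt",
    }

    def same_concept(term_a, term_b):
        # Two local terms match if they map to the same interlingua concept.
        a = interlingua.get(term_a)
        b = interlingua.get(term_b)
        return a is not None and a == b

    print(same_concept("hex-head fastener", "hex bolt"))  # True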
When Mary Keeler and I spoke at one of the recent meetings, we sketched on the board a 3-layer architecture whose layers together comprise the DKR and its gateway to the OHS (which I define here as a desktop, palmtop, or whatever window into the DKR).
Let me now sketch (in words) that 3-layer architecture and try to show how it has the opportunity to do precisely what Doug asks for, and how it allows us to build an ontology that serves as an interlingua to all possible users, no matter what they make of women, fire, and/or dangerous things.
Peirce's theory of categories has it that there are, fundamentally, three categories, and I map them here onto three layers: possibilities, actualities, and probabilities.

"Possibilities" is the bottom layer. It is nothing more or less than a database (archive) of human discourse, recorded experience.
"Actualities" resides in the middle. This layer serves as a lens, mapping the
possibilities into structures (an ontology) that can be viewed, inferenced,
debated, and so forth. This layer, IMHO, is the crucial one. To get it right,
it must consist of a kind of structure that, at once, serves as a universal
ontology (tongue ensconced firmly in cheek on that one), a platform for
reasoning and debate, and a permanent record of the evolving human knowledge
base. Whoever builds this layer wins.
"Probabilities" is the top layer in the architecture. It is, in fact, the gateway to the users "out there." Users will have their own mapping tools, perhaps what Doug calls the transcoder. Transcoding can, of course, be accomplished anywhere in the world: at the server (good for wireless), somewhere else in the network, or at the user's client computer. The purpose of transcoding is to allow the user to get or otherwise construct a view that suits his/her tastes/needs/desires. The user should have the ability to query actualities directly and, through that layer, ask a question like "where did you get that?" and have read-only access directly to the possibilities layer. This capability suggests that each "node" (don't go there, we shall define it eventually) contains pointers into the "document(s)" ("Hey! I said don't go there") from which it (the node) was derived. (Side note: I believe that transcoding now takes on a larger role. Originally it was conceived as a view-generation tool; now, I suspect, it also takes on the role of ontological mapping.)
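To make the three layers a bit more concrete, here is a minimal Python sketch of how I imagine them fitting together. Every class and method name below is mine, invented purely for illustration -- none of this is a specification. The point is only that actualities carry pointers back into possibilities, and that the transcoder can answer "where did you get that?" without ever gaining write access to the layers beneath it.

    # Possibilities: an append-only archive of recorded human discourse.
    class Possibilities:
        def __init__(self):
            self._documents = {}  # doc_id -> original text, never modified

        def archive(self, doc_id, text):
            if doc_id in self._documents:
                raise ValueError("original documents are never overwritten")
            self._documents[doc_id] = text

        def read(self, doc_id):
            # Upper layers get read-only access to the originals.
            return self._documents[doc_id]

    # Actualities: structured nodes, each carrying pointers back to its sources.
    class Node:
        def __init__(self, concept, source_doc_ids):
            self.concept = concept                      # e.g. "Use Case"
            self.source_doc_ids = list(source_doc_ids)  # provenance pointers

    class Actualities:
        def __init__(self, possibilities):
            self._possibilities = possibilities
            self._nodes = {}                            # concept -> Node

        def assert_node(self, concept, source_doc_ids):
            self._nodes[concept] = Node(concept, source_doc_ids)

        def concepts(self):
            return list(self._nodes)

        def where_did_you_get_that(self, concept):
            # Answer the provenance question with the original documents.
            node = self._nodes[concept]
            return {d: self._possibilities.read(d) for d in node.source_doc_ids}

    # Probabilities: a transcoder that renders a view in the user's own terms.
    class Transcoder:
        def __init__(self, actualities, rename=None):
            self._actualities = actualities
            self._rename = rename or {}                 # user's own category names

        def view(self):
            return [self._rename.get(c, c) for c in self._actualities.concepts()]

    archive = Possibilities()
    archive.archive("gil-letter", "In Knoware, relationships have no meaning...")
    facts = Actualities(archive)
    facts.assert_node("Relationship", ["gil-letter"])
    print(Transcoder(facts, rename={"Relationship": "Link"}).view())
    print(facts.where_did_you_get_that("Relationship"))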
How is this architecture used?
Here's a sketch of the appropriate scenario that traces document origination,
actualities generation, and user experience.
It doesn't really stop there. Let's pretend a user takes exception to something discovered in actualities.

From that, we can see that the user does NOT have write access to actualities. Only the system does -- and that, of course, is the big issue here. Hesse's Glass Bead Game suggested that there is a Bead Master, one individual who has the ability to do such mappings and to control the flow of epistemological evolution within the system. I tend to think that will not happen, at least in my lifetime. There needs to be a "machine" that does this work, and there is an enormous body of scholarly work being generated that hints at the emergence of this capability.
But, given that this capability remains the great "anal sphincter" in our
project, the entire architecture I have sketched cannot, by definition, be our
Version 1.0.
So, we must re-sketch it as something we can do today.
Largely, the overall architecture remains the same.
We simply do not set out to construct the universal ontology as a middle layer. Rather, we scale it back to some kind of human-generated (perhaps with machine augmentation as that evolves) middle layer, one that represents a consensus ontology for today, but one that is mutable as the consensus evolves (conceptual drift). By constructing the software as a pluggable architecture, we simply plug in software modules as they emerge to enhance the system; a rough sketch of what I mean by pluggable follows below. (Side note: I have a hunch that some activity of the UN, say, the UN/SPSC, will ultimately become the basis for the "universal mapping engine.")

Which brings me back (yes, Martha, non-linear types can find their way back) to the original space on which this diatribe is based.
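To be clear about what "pluggable" buys us, here is a minimal Python sketch under my own invented names: the middle layer delegates term mapping to whatever modules have been registered, so today's hand-built consensus table can later be supplemented or replaced by a machine-generated mapper (a UN/SPSC-derived engine, say) without touching the rest of the system.

    class MappingModule:
        """Interface every pluggable mapper implements (illustrative only)."""
        def map_term(self, term):
            raise NotImplementedError

    class HandBuiltConsensus(MappingModule):
        def __init__(self, table):
            self.table = dict(table)      # today's consensus; revised as it drifts

        def map_term(self, term):
            return self.table.get(term)

    class MiddleLayer:
        def __init__(self):
            self._modules = []

        def plug_in(self, module):
            # Better mappers plug in later as they emerge.
            self._modules.append(module)

        def map_term(self, term):
            for module in self._modules:  # first module that knows the term wins
                concept = module.map_term(term)
                if concept is not None:
                    return concept
            return None

    layer = MiddleLayer()
    layer.plug_in(HandBuiltConsensus({"node": "concept:Node"}))
    print(layer.map_term("node"))         # concept:Node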
The fundamental architecture being espoused at the meeting was that of an engine that mutates original documents by adding links to them.
The fundamental approach taken in the architecture I present here is one in
which absolutely no modifications are ever performed on original documents.
All linkages are formed "above" the permanent record of human discourse and
experience. I strongly believe that the extra effort required to avoid building
a system that simply plays with original documents will prove to be of enormous
value in the larger picture.
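As one way (of many) to keep originals untouched, here is a minimal Python sketch of a stand-off link store; the document ids, character offsets, and names are all mine, invented for illustration. Originals go into a write-once store, and every link lives in a separate table that merely points at a document and a span within it.

    class LinkLayer:
        """Links live 'above' the archive; originals are never touched."""
        def __init__(self):
            self._originals = {}   # doc_id -> text, write-once
            self._links = []       # (from_doc, start, end, to_doc)

        def add_document(self, doc_id, text):
            if doc_id in self._originals:
                raise ValueError("originals are write-once")
            self._originals[doc_id] = text

        def add_link(self, from_doc, start, end, to_doc):
            # The link is recorded beside the archive, never written into it.
            self._links.append((from_doc, start, end, to_doc))

        def anchor_text(self, link):
            from_doc, start, end, _ = link
            return self._originals[from_doc][start:end]

    layer = LinkLayer()
    layer.add_document("memo-1", "Augment plus categories equals OHS")
    layer.add_link("memo-1", 0, 7, "memo-2")
    print(layer.anchor_text(layer._links[0]))   # Augment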
Thus ends the diatribe. The non-linear one is now leaving the building. While
leaving, he wishes to acknowledge that the architecture sketched here has been
strongly influenced by Doug (for the big picture), Mary Keeler (for the
Peircian vision), Kathleen Fisher (for the knowledge mapping structures, along
with John Sowa and others), Eric (for his introduction to IBIS), and Rod (for
his web site that tries to keep all this together).
Sincerely,
Jack Park
jackpark@thinkalong.com