Arthur T. Murray
ATM:DBH:Arthur T. Murray said:Human language, of course. Your system is
very Anglo-centric. I suppose that's because
you probably only know English,
+German +Russian +Latin +Greek
[I often walk around thinking in Latin; therefore
I know that I exist.]
In that case, giving named concepts a numerical index is
an exercise in futility, since there is no 1-1 mapping
between words and concepts in different languages.
Such reasoning about "futility" does not apply here,
where we make no attempt at a one-to-one (1-1) mapping.
Above a few yeastlike "starter" concepts in the English
http://mentifex.virtualentity.com/enboot.html bootstrap,
each AI Mind species dynamically assigns a numeric "nen"
(for English) number on the fly to each learned concept.
The proper inter-mind identifier for any concept is not
the English "nen" number or the German "nde" number or
the French "nfr" number or the Japanese "njp" number but --
(are you sitting down and ready to absorb a great shock?)
the natural-language *word* for the communicand concept!
ATM:For example, if the word "Fahrvergnügen" has an index of
93824 in the geVocab module, what index(es) does that
correspond to in the enVocab module?
http://mentifex.virtualentity.com/standard.html#variable tells
why the German vocabulary module should be called "deVocab"
in accordance with the (International Organization for Standardization)
"ISO 639:1988 Extract: Codes for names of language" online at
http://palimpsest.stanford.edu/lex/iso639.html (de = deutsch).
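The numbering scheme described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Mentifex code: the class name Vocab, the method concept_number(), the starting indices, and the translations table are all assumptions made here to show the idea that each language module assigns its own numeric concept tag on the fly, while the word itself serves as the inter-mind identifier.

```python
class Vocab:
    """A per-language vocabulary module, named by ISO 639 code."""

    def __init__(self, iso_code, first_index):
        self.iso_code = iso_code       # e.g. "en", "de" per ISO 639
        self.next_index = first_index  # next free concept number
        self.word_to_num = {}          # word -> numeric concept tag

    def concept_number(self, word):
        # Assign a numeric tag dynamically the first time a word
        # is learned; return the existing tag on later lookups.
        if word not in self.word_to_num:
            self.word_to_num[word] = self.next_index
            self.next_index += 1
        return self.word_to_num[word]


en_vocab = Vocab("en", 100)   # English "nen" numbers
de_vocab = Vocab("de", 5000)  # German "nde" numbers

# The same concept gets unrelated numbers in each module...
nen = en_vocab.concept_number("driving-pleasure")
nde = de_vocab.concept_number("Fahrvergnuegen")  # ASCII transliteration

# ...so cross-language lookup must go through the word, not the
# number, via a (hypothetical) translation table keyed on words:
translations = {"Fahrvergnuegen": "driving-pleasure"}
assert en_vocab.concept_number(translations["Fahrvergnuegen"]) == nen
```

Note that no attempt is made to keep the numeric indices aligned across modules; the translation step happens at the level of words, which is the point being argued above.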
Normally it would be cheeky and presumptuous for an AI
parvenu and arriviste like myself to make bold to proclaim
a white paper of "Standards in Artificial Intelligence,"
but people began porting my AI Mind code into various
programming languages several years ago, and I had no
choice but to draw up a "Standards" document just to
make things easier for all concerned -- lest people
start "reinventing the wheel" or diverging too far.
If you don't mind my giving you some more ammunition
for things "Mentifex" to sneer at, then please notice
how craftily the new "AI Standards" white paper at
http://mentifex.virtualentity.com/standard.html now
contains the exact same brain-mind ASCII diagram as
found on page 15 (HCI) and on the cover of the AI4U
http://www.amazon.com/exec/obidos/ASIN/0595654371/ book.
Sometimes I wonder if people, especially students,
who see the Mentifex AI Mind diagrams, simply assume
that the depicted layouts are part of the general
background of neuroscience and cognitive science,
without realizing how original the diagrams are.
There must eventually be some quite authoritative
diagrammatic depictions of how the brain-mind is
organized with respect to information-flow, so
we shall see if the 35 AI4U diagrams were right.
ATM:DBH:[...]
One reason the AI4U book may prove to be valuable
over time is its 35 diagrams:
http://www.amazon.com/exec/obidos/ASIN/0595654371/
Ah yes. But then, you should have called it:
"Intelligence in 35 Diagrams".
The AI textbook AI4U (AI For You) has 34 mind-module diagrams,
one for each of the 34 chapters -- each about a single module.
http://mentifex.virtualentity.com/ai4u_157.html is diagram #35.
Notice how crafty it is in all the XYZ AI Weblogs (such as
http://mentifex.virtualentity.com/cpp.html -- C++ AI Blog) to
link to "ai4u_157.html" as the "framework" on p. 157 of AI4U,
instead of to "mindloop.html" as the diagram was once called,
because now the (actually quite genuine) impression is given
that everything ties in with the wonderful new AI Mind design
that has metaoneirically been revealed to the world in AI4U.
ATM:
ATM:DBH:[...]
As you go on to elaborate below, of course the neuronal
fiber holds a concept by storing information in the
*connections*.
You're the only person I know that calls them "fibers".
Everyone else calls them "neurons" or "units". When you
say "fiber", it makes me think you are talking about the
axon, and don't really understand how neurons work.
Logic dictates that the most important thing about neurons is how
elongated they are. If neurons were all punctiform cells like an
amoeba, there would be no long-distance transmission of signals
and there would be no diachronic persistence of conceptual ideas.
That is to say, the "longness" of neuronal fibers, each tagged
with as many as ten thousand associative tags to other concepts,
allows a concept to be embodied in one or more (redundant) fibers.
http://mentifex.virtualentity.com/theory5.html -- Know Thyself --
the Concept-Fiber Theory of Mind includes speculation that minds
may have evolved when originally dedicated sensory fibers broke
free by genetic saltation from their sensory-only dedication and
stumbled felicitously into a role of holding long-time conceptual
information rather than instantaneous-time sensory information.
ATM:
ATM:DBH:
LOL!!! Well, that's all very poetic, but not very useful for
building an intelligent artifact. I would hardly describe a
separate semantic and syntactic system for natural language
processing as a "novel, original contribution". Such a thing is
so obvious that even a rank novice in AI would probably *start*
with such a structure if designing a natural language system.
Yes, but where would the novice get such a system if not from AI4U?
Right here and now I would humbly like to ask paper-writers and
book-authors to cite the AI4U mind-diagrams and germane ideas in
their forthcoming publications so as to spread the theory of mind,
with adjustments and even refutations where necessary. I give
blanket permission for anyone anywhere to reproduce the diagrams
and to re-fashion them into line-art far prettier than ASCII.
ATM:
ATM:DBH:[...]
We have heard a lot of talk about portions of the
Internet coming together to form one giant "Global
Brain" in distributed processing. Uneasiness ensues
among us "neurotheoreticians." How would you like
it if your own brain-mind were distributed across
several earthly continents, with perhaps one lunar lobe,
and old engrams stored four light years away at Alpha
Centauri?
LOL!!! You cannot possibly be serious!!! If I thought
you were actually intelligent, I would chalk this up to
satire.
Since a brain-mind needs to be local, I am quite serious.
As an aside, please consider the following line of thought.
By the way, this same thing happens to me whenever I try
to read a paragraph or two of Esperanto -- an artificial
language which I do not know myself. As I stare at a
sentence in Esperanto, a very strange thing happens.
[Now remember, I am fluent in Latin, Germanic and Slavic
languages -- the Esperanto guy was Polish -- plus Greek.]
The meanings of the Esperanto words slowly surface in my
polyglot mind as I stare at each succeeding word, and I
feel as if I have known Esperanto automagically for years.
But yet I could never try to read a book in Esperanto --
it would be cranially painful -- like the scientist
father-figure trying to mindmeld with the Krell in
http://us.imdb.com/Title?0049223 -- Forbidden Planet.
Suppose that Reichsfuehrer Ashcroft and his Parteibonzen
were to set up a Total-Information-Awareness AI Mind
that knew everything about every American citizen and
every teenage or older inmate at the Guantanamo K-Z
and the other now emerging American concentration camps,
such as Novo-Sobibor near Baghdad International Airport.
Such an all-seeing, all-knowing Big-Brother-Mind would
only need some sort of RF-ID tag or other chance input
to start thinking about you and your entire life-span:
"As I think about Citizen Held, suddenly I remember...
his fingerprints, uh, wait a minute, his genome... and,
uh, it's in there somewhere... his every Usenet post."
The distributed Held-data are not in the Mind until
they are fetched, but BBN (Big-Brother-Noesis) *feels*
as if it knew the data all along, albeit recalling slowly.
ATM:
ATM:DBH:[...]
http://mind.sourceforge.net/conscius.html is my summary.
The fact that you sum up consciousness in 4 weak,
meaningless paragraphs is surpassed only by the fact
that you can't spell "conscius" [sic].
I shorten all Wintel filenames to be eight characters or fewer.
The name "conscius.html" happens to be perfectly good Latin.
DBH:[...]
Let's do something new for a change. Let's code AI in
many different programming languages and let's let evolve
not only the AI Minds but communities of AI adepts to
tend to the AI Minds and nurture them into full maturity.
I intend to do just that, but not using the Mentifex model,
Why not? You could make whatever changes you saw as needed,
and then your resulting AI would be a brand new Mind species.
and not out in the open. But until such time as I have a
working prototype, I'm not going to ramble on about
speculations and mindless musings. [...]
Dave
Arthur