Ingo Swann (01Mar97)
If we think in terms of PERCEPTION, then we are most likely to think in terms of THINGS -- because things are what we perceive and have mental-image pictures of stored in our memory library. The incoming signals through the eye are processed as signals through a number of systems before they end up as thing-images.
It is relatively certain that our "understanding" processes undergo something quite similar, if not identical.
When we think in terms of THINGS, then
we think in terms of objects, their shapes, sizes, colors, their
meaning as an IT. We also think in terms of the distances between
objects, their placement with regard to each other.
If we think of subjects or topics, we do so by first converting them into an IT-THING: for example, consider biology. IT is a science, as most know whether they know anyTHING more about IT.
The most fundamental basis of most consensus
realities consists of IT-THINGS, and the most essential nomenclature
utilized is set up to identify it-things. And this is the case
even regarding philosophical abstractions, which, too, are it-things
-- e.g., IT is an abstraction whatever IT is.
The general purpose of the first organized
psychical research organizations set up during the 1880s was to
witness, inspect, identify, separate and categorize what later
came to be called "paranormal" phenomena.
But in order to proceed, the phenomena first had to be given identifiers, and which turned the phenomena into IT-THINGS. "IT is clairvoyance," for example. "IT is levitation," "IT is mediumship," "IT is thought-transference" (a term-concept later replaced by "IT is telepathy"), and finally "IT is psychic" whatever it is.
Phenomena are not just phenomena, but
different kinds of them, and which need to be differentiated,
distinguished and identified one from another. But sometimes this
differentiation doesn't work very well if one doesn't really understand
what IT is in the first place.
For example, in spite of about 100 years of effort, exceedingly great confusion continues to persist in differentiating between clairvoyance and telepathy.
But generally speaking, differentiation
is achieved by making an IT out of different kinds of phenomena
and then assigning a nomenclature bit (or byte) in order to talk
or write about any of them. When this is accomplished, we can
thenceforth "know" what is being referred to because
it has been rendered into an IT-IS kind of THING.
The first essential goal of organized
parapsychology (circa the 1930s) was not only to inspect ESP phenomena,
but to do so only within the parameters of recognized and approved scientific methods.
Extra-sensory perception (ESP) was an it-identifier of "perceptions" that could not be attributed to any of the five physical senses, and so it could be said those perceptions were external to or outside the physical senses.
To test for the presence of ESP in given
individuals or subjects, "targets" were utilized, and
there came into existence standardized forms of targets (among
them the famous Zener cards) which mostly consisted of pictures
of geometric shapes or colors. A "target" is always an IT-THING of some kind.
The goal of the testing was to determine if the subjects could perceive the "targets" via senses other than the physical five.
The targets, of course, were IT-THINGS - expressed as "It is a circle," "It is a square," or "IT (the target) is the wavy lines."
Now, in the "universe" of IT-THINK,
there is only one basic way to judge "success" - whether
one perceives-sees IT or doesn't see IT.
Thus, the parapsychology ESP subjects either "got the target" or "didn't get it." Or they "hit" the target, or "missed" it.
As we shall see in later essays, the
"hit-miss" paradigm that arose in parapsychology led
to some rather dreadful situations regarding comprehension and morale.
But nonetheless it was a perfectly logical approach within the contexts of IT-THING-THINK, and which contexts are universal everywhere and in all cultures.
The concepts of PERCEPTION are intimately
and permanently linked to IT-THINGS, because if one examines any
of them very carefully, one can only perceive an IT. And even then,
as has been reviewed in Part 3, the IT-PERCEPTION is a mental-image
reconstruction, the sum of which is of the perceiver, and not
exactly of the IT itself.
It is worth the time to review a few
of the numerous definitions of THING:
1. a separate and distinct individual quality, fact, idea, concept or entity;
2. a material or substance of a given kind;
3. a piece of information or news;
4. an event, deed, act or circumstance;
5. a state of affairs in general, or within a specific or implied sphere.
The five definitions of THING given above
can and do account for almost, but not quite, everything - and
which is why we refer to everything AS every-thing. And so our
perceptions are geared to perceive, identify, and discriminate
among THINGS - and which then emerge in conscious awareness as thing-images.
There is absolutely nothing wrong with
basic IT-THINK, and indeed it permits survival on about a 90 per
cent basis - except when there are holes or gaps in it.
But IT-IS gaps can be somewhat corrected
within the contexts of consensus realities in that IT-IS perception
that is consistent with consensus reality is considered proper
or successful perception, while perception that is not is considered
improper or aberrant-undesirable - or at least non-conforming.
In general, however, any gap-difficulties
along these lines are sort of smoothed over in that the nomenclature
of a given consensus reality is the concepto-nomenclature everyone
within it speaks and writes with - and tends to think with, too.
Just outside the enormous, collective
IT-THINK syndromes of our species is a slightly different THINK level.
This "level" of thinking has to do with RELATIONSHIPS between and among IT-THINGS.
Identifying it-things, and identifying them as it-things, only goes so far, although that process is entirely serviceable to a certain degree.
One can identify it-things, endlessly
so, but only because they become perceptually concrete in some
form - even an idea takes on a sort of concrete-ness if it becomes
shared and approved of.
Relationships among it-things, however,
are usually a far different matter because, in the first instance,
they have to be deduced. For example, the relationship between
hydrogen atoms and hydrogen bombs is not readily apparent, and
thus had to be deduced before it became identifiable.
This is to point up that although the
arrangement of IT-THINK to IT-THINGS is usually on a one-to-one
basis, the arrangement of IT-THINK to relationships among and
between IT-THINGS is not on any kind of one-to-one basis - excepting
the most gross and familiar samples of it.
The reason for this difficulty is that relationships between it-things can be many and varied and include anything from the imaginable to the unimaginable, from the boring to the fantastic.
Another difficulty arises because once
IT-THINK becomes properly installed it tends to run on automatic
with the mind-boggling speed encountered in Part 3 regarding the
basic ten-step processes of perception.
DEDUCTIVE-THINK regarding relationships,
however, almost never runs on automatic unless the deductions
have themselves been pre-reduced to common understanding, at which
time those particular deductions have taken on the clothing of IT-THINGS.
Relationships of it-things to one another
can be explicit or implicit, with the explicit ones being easier
to identify, this type of thing usually being referred to as logic.
Implicit relationships, however, are
identified as such because there is very little in the way of
objective or explicit cues involved.
Thus, the deducing (detecting) of implicit relationships can escape the deductive processes of almost everyone - with the exception of those who somehow chance to "notice" them.
And those who DO notice them are quite likely to be regarded as intuitives. And, indeed, if it were up to me, I'd itemize the deduction of implicits as the most basic and most broadly shared of intuition's many types. And here is a basic clue regarding "enhancing" one's intuition - by first enhancing one's deductive processes regarding implicit relationships.
As it is in our present consensus reality, we reinforce the processes regarding explicit relationships, but pay very little attention to strengthening the much wider parameters of implicit relationships.
One of the more recent definitions of
"genius" is that a genius is one "who sees what
others cannot." Although this clearly involves a lot of factors,
the deducing of implicit relationships probably is fundamental
here - since most rely on explicit rather than on implicit deducing.
Now to move speedily on.
The relationship, for example, between ESP and perception seems explicit enough, and therefore seems logical -- especially when a long line of "psychics" say "I perceive" thus and so.
They are correct in saying that they do perceive. But what they perceive is in fact whatever has been processed through their perception-making systems, the sum of these processes being the perception.
And as we have seen these end products are not at all one-to-one images. And so what they report "seeing" may or may not correspond with the actual facts or conditions of what they have "seen" as perceptions.
This is a situation that has not gone
unrecognized in parapsychology.
In testing for ESP, researchers encounter many more "misses" than "hits," and the frequency of the misses has been condensed into the theory of "Psi-missing." It is thought that Psi-missing is somehow related to "avoidance" of the "target," and as such involves some kind of unidentified psychological factors.
You see, "paraPSYCHOLOGY" is,
after all, majorly conceived of as a branch of psychology -- not
as a branch of perception study. And when it was understood by
the rest of science that "perceptions" mostly consisted
of "cognitive" versus physiological factors, perception,
too, began to be thought of as predominantly having a psychological basis.
In any event, ESP and perception of IT
targets are thought to go hand-in-hand, and all explicit and implicit
considerations along these lines are shared not only in parapsychology,
but throughout science, philosophy, and in our present general
consensus realities as well.
Furthermore, the web of Psi-Perceptions
is linked throughout by the IT-making nomenclature commonly utilized.
If, then, one refers to Psi or ESP, it is automatically understood everywhere that you are referring to special formats of perception that have been assigned IT nomenclature: psychic, clairvoyance, telepathy, intuition, etc.
It is even commonly understood that "special" refers NOT to perception per se, but to the unusual other-than-sensory ways it is achieved -- if and when it is achieved.
Well, this "prevailing paradigm,"
as it should properly be termed, has actually prevailed for about
100 years, and has been unsuccessfully approached and tested in
the light of every angle conceivable.
The only thing that has been achieved is to document beyond any shadow of doubt that ESP processes do exist - but their presence, as detected by parapsychological methodologies, is found at only very low statistical levels (which will be discussed in a later essay).
So, "psychic" perceptions have
been tested for from every angle possible -- which is to say,
every angle consistent with the prevailing consensus reality hypotheses
that ESP and Perception are interrelated both explicitly and implicitly,
so much so that you can't have the one without the other.
But what if this consensus reality concept
isn't complete enough? In other words, what if it has a "gaping
hole" in its interconnecting line-up of conceptualizing --
one of those invisible gaping holes that are not at all obvious
because the apparent picture seems complete and logical enough?
And what if what is needed to fill this
hole has been around for about fifty or more years, but has been
excluded because the prevailing concepts are considered sufficient
unto themselves? And because if the needed factors were to be
included, the entire consensus making nomenclature appropriate
to Psi-Perceptions would either explode or be useless and vacated.
This would mean that everyone has cloned
the wrong stuff, so to speak, and what they have cloned in this
regard has been acting as mental information processing viruses.
Ye gads! This would imply a radical reality shift - one which, in its first instance, would big-time EMBARRASS those possessed of the cloned viruses - not only in parapsychology, but in science and philosophy as well, to say nothing of the consensus realities involved.
The essential definitions of the verb TO INFORM, and the noun INFORMATION, never have been ambiguous, but quite precise and clear.
INFORM is said to have been derived from
the Latin verb INFORMARE from IN + FORMA.
However, the Latin FORMA was a noun, and even though the preposition IN is added to it, it still remains a noun. And nouns, of course, refer to and are meant to identify it-things, not activities, which verbs indicate.
FORMA referred to the shape and structure
of something as distinguished from its material or constituent parts.
The preposition IN refers to inclusion of some kind, most usually a spatial inclusion, but also inclusion in something that does not have spatial-material form such as belief, faith, opinion or assumption (i.e., in the faith, only in belief, in his or her opinion or assumption, etc., and of course, IN his or her conception or misconception.)
The key concept of FORMA refers to shape
and structure, and so INFORM refers to what has structural shape,
has taken on structural shape, or been put into structural shape.
So, technically speaking INFORM remains a noun with regard to whatever form a form is in, becoming a verb only when referring to an activity which puts something into shape-structure.
In English, however, IN + FORM as referring
to structural shape has been used only rarely, this meaning having
early been replaced with the concept of MESSAGES - meaning that
messages convey information, and that information is used to convey messages.
If the above seems mildly confusing, it's because it is. So don't worry too much at this point.
You see, on the receiver's part, the actual message is what one deduces from the words (or "signals") which the sender believed represented the message he or she was trying to send. This "process" takes a good deal of "encoding" on the sender's part and a good deal of "decoding" on the receiver's part. But I digress.
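The encode/decode round trip just described can be made concrete with a small sketch. This is a hypothetical toy model (the codebooks and messages are invented for illustration): the sender wraps an intended meaning in words, and the receiver deduces a meaning from those words using his or her OWN table, which may not match the sender's.

```python
# A toy model of the encode/decode round trip. The sender turns an intended
# meaning into words (signals); the receiver turns those words back into a
# meaning using a DIFFERENT table -- so the deduced message is close to,
# but not identical with, what the sender meant.

sender_codebook = {"danger": "storm coming"}           # meaning -> words
receiver_codebook = {"storm coming": "bad weather"}    # words -> meaning

def encode(meaning):
    """Sender's side: wrap the intended meaning in words (signals)."""
    return sender_codebook[meaning]

def decode(signal):
    """Receiver's side: deduce a meaning from the words received."""
    return receiver_codebook.get(signal, "no idea")

signal = encode("danger")
print(decode(signal))   # prints "bad weather" -- not quite "danger"
```

The point the sketch makes is the one in the paragraph above: the "actual message" on the receiver's end is a deduction, shaped by the receiver's own decoding table.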
Additionally, when we think of something
formed we tend to think in terms of FORM only, not that something
has PUT whatever it is INTO form or format.
I now caution each who chances to read the above to slow down, focus a little, and notice two important factors:
In English, then, the concept of "into
form" has been dropped or vacated, and so we tend not to
think in terms of how and why something has come into whatever
form it has.
But this is somewhat typical of English nomenclature, which tends to IT-identify end products as things in themselves, not as the result of processes - which is to say, formative processes that have to be structural in order to arrive at any given in-formed state.
This is best perceived not via words, but by a diagram. I'll provide one in the context of a more refined essay further on. But anyone can make one for themselves by diagramming how an IT does take on form.
To help in enhancing clarity here, when
we think of those superpower faculties that result in some kind
of clairvoyance, we tend to think the images the clairvoyant "sees"
ARE the clairvoyance.
I.e., he or she "sees" things that others don't, and by means others don't have active - hence the clairvoyant angle. We mistake WHAT the clairvoyant sees as the clairvoyance, and fail to notice that the informative processes which permit the seeing are the real clairvoyance.
In other words, into-form-making PROCESSES
always precede the resulting images.
Thus, if clairvoyance is possible, the IN + FORM clairvoyance-making processes pre-exist what they yield - for what they yield is what the clairvoyant sees. If the processes are not active, then the clairvoyant will not see anything.
If we compare this to perception-making processes, we know that the perceptions are the sum result of whatever they have been processed through. The superpower faculties apparently "work" in the same exact way.
It is interesting, and important, to
trace the ENGLISH etymologies of INFORM and INFORMATION. The OXFORD
DICTIONARY OF THE ENGLISH LANGUAGE summarizes when and under what
conditions English nomenclature can be noted as first in use.
With regard to INFORM, the OXFORD identifies
the primary ancient Latin sense of INFORMARE (to give form to,
shape or fashion), but notes: "The primary sense had undergone
various developments in ancient and medieval Latin, and in French,
before the word appeared in English."
This is a clever way of saying that when INFORM came into English usage it did not mean putting into a form.
This appearance in English seems to have
taken place during the 1300s, but seems more than anything else
to have referred to "formative principle, or formative character."
Used in this sense, the first ENGLISH
uses of INFORM were probably drawn from French rather than directly from Latin.
It is certain that the word INFORMATION is drawn from French, not directly from Latin. Its first usages in English, again during the 1300s, are exactly those of the French:
"The action of informing [specifically as] forming or molding of the mind or character, training, teaching or instructing; communicating of instructive knowledge."
In this sense, then, from French into
English, INFORMATION referred to mind-shaping, out of which would
emerge "character" - such having been a particularly
French preoccupation ever since.
After this shift in usage-meaning, in
English INFORMATION then appears to have separated into two components,
both utilizing the same nomenclature term, INFORMATION.
The first component remained the same,
almost up until the 1930s when it began to be identified as "mind-programming."
The second component had to do with providing
evidence, either for or against someone, and usually the latter
regarding criminal court cases, heresy examinations and trials.
It would appear that "evidence" found acceptable or logical in the light of certain consensus realities was accepted as "information" - while "evidence" found unacceptable was rejected as something else.
INFORMATION was still being thought of
in exactly this way among the world's intelligence agencies and
systems when I chanced to fall into the government-sponsored "Psi-spy"
research project at Stanford Research Institute in 1972.
Also, during that same epoch, the then hopeful and exceedingly well-funded realm of "scientific" futurology (now generally defunct) also had adapted to this same concept of information, and was being tortured by it - which is to say, adapted to the concept that information consists only of whatever is found acceptable, or logical within a given consensus reality.
"Consensus reality," however,
was considered by futurologists to consist of the majority opinion
of "informed specialists" and/or their vote. Since majority
opinions can be wrong at least as often as right, one does wonder
how futurology ever got off the ground. However, one doesn't
need to wonder why it "failed."
During the 1600s, and specifically as
the result of certain Renaissance activities, a new concept-context
regarding INFORMATION was added into this or that drift of meanings.
The earliest noted uses of this meaning occurred about 1649, and we find the gist of this meaning more or less unchanged in WEBSTER'S of 1828, the original edition of the first American dictionary of the English language.
In that dictionary this meaning is given
as the FIRST meaning of INFORM. And I quote:
"INFORM, verb transitive: - Properly, to give form or shape to, but in this sense NOT USED. [Emphasis added.]
"1. To animate; to give life to; to actuate by vital powers.
"2. To instruct; to tell to; to acquaint; to communicate knowledge to; to make known to by word or writing."
"INFORM, verb intransitive: - To give intelligence, as in: `He might either teach in the same manner, or inform how he had been taught.' And: "To inform against, to communicate facts by way of accusation."
"INFORMATION, noun: -
"1. Intelligence via notice, news or advice communicated by word or writing.
"2. Knowledge derived from reading or instruction.
"3. Knowledge derived from the senses or from the operation of the intellectual faculties.
"4. Communication of facts for the purpose of accusation."
As of 1828, then, long gone is the concept
of IN + FORMA, as is indicated by WEBSTER'S 1828 itself - and
not reactivated until the advent of Information Theory, as will
be discussed ahead (save to mention here that information theory
cannot survive without that concept.)
In WEBSTER'S 1828, the first definition
of INFORM - to animate; to give life to; actuate [i.e., activate]
by vital powers - reflects the central hypothesis of VITALISM,
which we have already encountered.
However, the term VITAL-ISM apparently had not evolved as of 1828, since it is not given in that same dictionary. (The concept of an ism itself seems to have surfaced only in about the 1780s.)
However, a brief review of this topic
is important - because there are significant links between essential
vitalism, information, and activation of the superpower faculties.
(An individual essay regarding vitalism will be provided within
this series of essays.)
You see, IF information (intelligence)
is accurate enough, it is broadly accepted that it can activate
or vitalize activity, and which would be akin to animating or vitalizing it.
On the other hand, if information (intelligence) is cluttered with information viruses, one would not normally expect activation. Rather, one would anticipate de-activation, or devitalization - and which, if it could happen, would result in all sorts of de-evolutionary stuff.
VITALISM was crushed and beaten into non-existence
about 1920, at which time the consensus realities of philosophical
materialism acquired the contexts of science proper and thenceforth
prevailed. And any science based in philosophical materialism
simply has to be an IT-MAKING science.
Prior to that, philosophical vitalism
(technically in existence roughly since about 1533 during the
Renaissance) and philosophical materialism (technically in existence
since about 1845) had been seen as sister sciences.
The advocates of the two philosophical orientations were soon antagonistic to each other. An enormous conflict, now quite forgotten, ensued and lasted for about eighty years, with the materialists being the ultimate victors. Vitalism was snuffed in academia, and references to it were deleted from the consensus reality sources which then prevailed as logical and rational.
In spite of all the philosophical imbroglios
that are brought forth to explain the victory, the actual reason
is quite simple.
By 1920, the material sciences had demonstrated they could produce products of enormous, even fabulous economic value. The vitalism sciences did not produce much of economic meaning. Funding therefore went to the material sciences. End of that story.
There were two essential definitions
regarding vitalistic principles, from which a number of other concepts
were derived. Be sure that I am not digressing or drifting here.
1. That the functions of a living organism are due to a vital principle distinct from physical-chemical forces;
2. That the processes of life are not explicable by the laws of physics and chemistry alone - and that life is in some part self-determining and self-informing.
Please read self-informing as IN + FORM, meaning self-making into form.
For conceptual clarity, any use of the
term VITAL within vitalism's contexts should immediately be replaced
with ANIMATING - at least to discriminate between animate and inanimate.
In the end, all of the nomenclature that
might be associable to vitalism and/or its two essential concepts
was stringently, and with something akin to a vengeance, expunged
from modernist consensus reality-making literature. Any even glancing
reference to those terms was enough to occasion loss of professional
standing, potential funding, etc.
Thus, cutting-edge scientists have to walk gingerly, and talk around such concepts if and when they chance to encounter any possibility of their real existence.
In any event, this brief review of the
etymological history of INFORM and INFORMATION indicates that
only one concept of them prevails, the concept that information
is what one reads and learns from.
We can note, too, that two important concepts have more or less fallen into disuse and oblivion: IN + FORMA, and INFORM as it relates to animating principles.
And it is in this consensus reality condition that information theory arose.
So, what IS information theory?
And why might it be of fundamental importance with regard to activating (vitalizing) the superpower faculties?
Most sources dealing with information
theory are somewhat or completely inaccessible (unintelligible)
to those who haven't developed the mental information processing
grids or nomenclature to deal with it.
However, THE NEW COLUMBIA ENCYCLOPEDIA (1975) has a rather neat rendering, at least as regards the early developmental hypotheses.
The theory is indicated as a mathematical
one, principally formulated as of 1948 by the American scientist
Claude E. Shannon, to explain aspects and problems of information
and communication ("communication" later being thought
of as information-transfer, especially in the psychoenergetic
research of the former USSR.)
The entry in the encyclopedia is worth
quoting in its entirety, and I'll do this first.
I caution you not to get confused if you don't understand parts or all of it.
After quoting it, I'll lift out the signal, easy to conceptualize, part and clarify it with respect to opening new cognitive channels toward activating the superpowers.
I never recommend anything, but sometimes
I "suggest." If you have any desire at all to approach
an activation of any of the superpowers, I suggest you pay serious
attention to the quoted materials below, even to the point of
memorizing them (i.e., installing them quite firmly in your memory library).
One preliminary note, though. Shannon
et al. seized upon the term ENTROPY and included it in the discursive
part of the theory. This is a term properly belonging to thermodynamics,
and it has otherwise since been defined in a number of different
ways. In information theory it means "noise," and so
I'll replace "entropy" with noise, indicating in brackets where I have done so.
"In this theory, the term INFORMATION is used in a special sense; it is a measure of the freedom of choice with which a message is selected from the set of all possible messages.
"Information is thus distinct from meaning, since it is entirely possible for a string of nonsense words and a meaningful sentence to be equivalent with respect to information content.
"Numerically, information is measured [via the theory] in BITS (short for binary digit; see Binary System.)
"One bit is equivalent to the choice between two equally likely choices. For example, if we know that a coin is to be tossed but are unable to see it as it falls, a message telling whether the coin came up heads or tails gives us one bit of information.
"When there are several equally likely choices, the number of bits is equal to the logarithm of the number of choices taken to the base two. For example, if a message specifies one of sixteen equally likely choices, it is said to contain four bits of information.
"When the various choices are not equally possible, the situation is more complex.
"Interestingly, the mathematical expression for information content closely resembles the expression for ENTROPY in thermodynamics. The greater the information in a message, the lower its randomness, or `noisiness,' and hence the smaller its entropy [i.e., the smaller its noise content.]
"Often, because of constraints such as grammar [language, and the way it is expressed], a source does not use its full range of choice. A source that uses just 70% of its freedom of choice would be said to have a relative noise ratio [entropy] of 0.7. The redundancy of such a source is defined as 100% minus the relative entropy, or, in this case, 30% [meaning that 30% of the message elements are fixed by structure rather than freely chosen].
"The redundancy of English is about 50%; i.e., about half of the elements used in writing or speaking are freely chosen, and the rest are required by the structure of the language.
"A message proceeds along some channel from the source to the receiver. Information theory defines for any given channel a limiting capacity or rate at which it can carry information, expressed in bits per second.
"In general, it is necessary to process, or encode, information from a source before transmitting it through a given channel.
"For example, a human voice must be encoded before it can be transmitted by radio.
"An important theorem of information theory states that if a source with a given entropy feeds information to a channel with a given capacity, and if the noise in the source is less than the channel capacity, a code exists for which the frequency of errors may be reduced as low as desired.
"If the channel capacity is less than the noise [entropy] of the source, no such code exists.
"The theory further shows that noise, or random disturbance of the channel, creates uncertainty as to the correspondence between the received signal and the signal transmitted.
"The average uncertainty in the message when the signal is known is called the equivocation.
"It is shown that the net effect of noise is to reduce the information capacity of the channel. However, redundancy in a message, as distinguished from redundancy in a source, makes it more likely that the message can be reconstituted at the receiver without error.
"For example, if something is already known as a certainty, then all messages about it give no information and are 100% redundant, and the information is thus immune to any disturbances of the channel.
"Using various mathematical means, Shannon was able to define channel capacity for continuous signals, such as music and speech.
"While the theory is not specific in all respects, it proves the existence of optimum coding schemes without showing how to find them. For example, it succeeds remarkably in outlining the engineering requirements of communication systems and the limitations of such systems." SEE C. E. Shannon and Warren Weaver, THE MATHEMATICAL THEORY OF COMMUNICATION (1949).
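The arithmetic in the quoted entry is easy to check. Here is a minimal Python sketch (mine, not the encyclopedia's) of two computations it describes: bits as the base-two logarithm of the number of equally likely choices, and redundancy as 100% minus the relative entropy of the source.

```python
import math

def bits(n_choices):
    """Bits of information in a choice among n equally likely messages:
    the logarithm of the number of choices, taken to base two."""
    return math.log2(n_choices)

def entropy(probs):
    """Shannon entropy in bits when the choices are NOT equally likely."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(bits(2))    # a coin toss carries 1.0 bit, as in the entry
print(bits(16))   # one of sixteen equally likely choices carries 4.0 bits

# Redundancy is defined as 100% minus the relative entropy of the source.
relative_entropy = 0.7           # a source using 70% of its freedom of choice
redundancy = 1.0 - relative_entropy
print(round(redundancy, 2))      # 0.3, i.e. the 30% of the entry's example
```

Note that unequal choices always carry less information than the equal-choice maximum: entropy([0.7, 0.3]) comes out below the 1.0 bit of a fair coin, which is exactly why a "relative entropy" below 1 appears at all.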
When we begin to think of what information IS, most of us probably will think it is what we hear or read in some kind of printed or visual format. We think this because this concept "dwells" in consensus realities as such, and we have cloned it quite nicely. And from any number of aspects that concept is serviceable - as far as it goes.
But. By the time "information" reaches a spoken, printed or visual format, it is an end-product of the processes which have organized and produced it in those formats.
Nonetheless, this end-product can act as a "source" of information and we can more or less duplicate it in our own heads.
"Duplicate," of course, means
reproduce or copy it into our own heads, the ostensible goal being
to understand it. In this sense, then, the information we in-put
into our heads has been CONVEYED by the spoken, printed or visual formats.
After the in-put, however, the "conveyed" information continues on its way into our heads by being filtered through the mental information processing grids of the recipient. The grids are extensions of the memory library earlier referred to.
In THIS process, the "information"
will ultimately reach steps 8 and 9 of the perceptual processes.
Meaning that the "information" that finally comes out
as understanding will be the sum of the in-put plus whatever the
in-put gets filtered through in the case of each individual.
If matches to the in-put "information content" are found in the memory library, THEN a kind of duplication can take place. The duplication is called "understanding."
But if matches are not found, then the information content probably will be routed through the nearest similarity in the memory library. In this case, we are now one step or more removed from duplication (and removed from "complete" understanding).
If no matches are found, then the recipient of the in-put information content will "draw a blank" - for example, regarding twelve types of snow, camels, telepathy or clairvoyance.
In other words, INFORMATION is what we
understand, even if only in a partial way. If the in-put does
not result in "understanding," then it is NOT information.
The whole of the above, and its obvious problem areas, is what some information theorists refer to as the information transfer process.
One of the central concepts of information theory is that all information is available all of the time.
Some of the theorists mitigate this all-inclusive concept by saying that information sources are everywhere.
Others opine that information can be drawn from everything and anything.
In the sense of all of the above, the
EXISTENCE of information is not in question. What is problematical,
in big-time ways, is the TRANSFER of it into a system wherein
it can be duplicated, misduplicated, or blanked out.
In the sense of the human, the prevailing consensus reality concepts usually hold that the "system" being referred to is "the mind" and its mental information processes.
"The mind," however, when spoken
of this way is applicable as a generality to every human specimen,
and which is good enough for a theory.
In matters of actual PERFORMANCE, though, the "individual mind" should be substituted for the all-inclusive generality - because even if information does exist everywhere, it is the individual mind that produces the duplication, misduplication, or blanking out, which in turn result in understanding, misunderstanding, or nothing at all.
Please note that the term PERFORMANCE
has been emphasized above because it is entirely relevant toward
activating the superpowers, "activating" having to do
with performance. And here I foreshadow a topic that will require
at least two essays among those several more to come.
Information transfers via speech, print, or visual formats actually contain two MODES or MODULATIONS of information content.
But to get at this, it must FIRST be comprehended that the words of speech or writing/print, and the images, charts, etc., of the visual formats, are NOT the information content itself, but merely symbols and signs for it.
In this sense, the symbols and signs are the OBJECTIVE "carriers" of the information content - which is to say that they are SIGNALS that will stimulate duplication of the content simply because the receivers associate MEANING to the signals - IF the meanings of the signals are shared in common.
If the meanings are not shared among the recipients, then the signals will be "inaccessible" to all those who do not share them.
And here is one of the most apparent bases for language and its concepto-nomenclature - to establish a shared and sharable basis for the sending and receiving of information.
This is to say that pre-set meanings are encoded into nomenclature and images, and the consensus reality learning networks transfer the encoded meanings into the memory storage of their citizens so that there can be a mutual basis of information transfer and exchange. An intrasocial collective or group is thus formatted regarding transfer of information within it.
The best pre-set words or images to effect
this information transfer unity are those that have precise meanings
encoded into them, since the meaning-information-content can be
"recognized" most easily.
Any increasing permutations of meanings
regarding a given information transfer signal tend to decrease
the cohesion of the unity within the collective, and tend to permit
distortions of meaning within individuals.
One would therefore think that precise
and exact meanings for signals would be stringently established
by social consensus necessity. And indeed this IS the case where
an absolute need to do so is apparent, the "need" being
intimately related to performance, and especially where it is
found to be dangerous not to be precise.
For example, no one becomes an electrician based only on the general consensus reality that electricity lights up bulbs and turns the toaster on.
A suitable and precise nomenclature has to be evolved and become shared among potential electricians - or else they can get fried all too easily. Airline pilots cannot become pilots simply because airplanes fly. Arctic people cannot deal with snow simply because it is snow, and Arab Bedouins will be out-maneuvered in the economics of the camel market if they think a camel is a camel.
However, within any given social unity where there is no perceived absolute need to INCREASE nomenclature, that kind of effort is not usually undertaken - because the average citizen within the unity, and with regard to average performance within it, can function quite well via a lesser rather than a greater signal-carrying nomenclature.
And, to begin with, the so-called average citizen probably won't ever "acquire" a nomenclature in terms of quantity that extends beyond his or her recognized need to do so, or beyond what it takes to fit into the consensus reality they desire to fit into (or, sometimes, are trapped within).
So the average citizen within any given consensus reality has no explicit or necessary need to add more specific nomenclature; but there is also a need not to have too little.
The way this is apparently resolved is to establish a number of IT-IDENTIFIERS that do not require much further break-apart into it-TYPES, into increasing refinement of comprehensions of types of something, and which would require the increase of nomenclature.
In this way, then, people who do not need to use different types of snow for survival can be content with snow as something that falls in winter and needs to be shoveled when it interferes with traffic or might crush the roof in. So, among such people, SNOW is snow. It is a perfectly good information signal, and the need for any increasingly refined differentiation beyond that probably has to do only with amounts of it.
So, among such people "SNOW"
is a "clean" and "clear" signal regarding
information transfer, whereas among the Arctic peoples barely
fifty years ago it would have been as "noisy" as Times Square on New Year's Eve.
In much the same way, people who don't realize that different types of clairvoyance exist will not have any need to identify them - meaning that the single use of this one nomenclature signal is perceived by them to be sufficient.
But not to anyone who wants to learn how to be clairvoyant. The best instructors of clairvoyance I am familiar with have to begin by breaking the single concept apart, at least into "aspects" of clairvoyance.
So, here we now approach the concept
of "clear" and "noisy" signals, this concept
revolving around whether or not the carrier (word or image) of
a signal is a precise, thus a clean one, or whether it induces
noise into the signal load.
And it is at this point that the essential
problems of information transfer integrate with the basic information
theory offered up by Shannon in 1948, the basic problem regarding
information transfer OF ANY KIND having to do with the ratio between
"signal" and "noise."
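Shannon made this ratio exactly quantifiable. As a hedged illustration (the bandwidth and SNR figures below are arbitrary assumptions of mine, not drawn from this essay), the Shannon-Hartley theorem gives the capacity of a noisy channel as C = B * log2(1 + S/N):

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N),
    the maximum error-free rate in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative: a 3 kHz voice-grade channel at several signal-to-noise ratios.
for snr in (0.1, 1.0, 1000.0):
    print(f"SNR {snr:>7}: {channel_capacity(3000, snr):,.0f} bit/s")
```

Note that capacity never drops to zero even when noise exceeds signal (SNR below 1) - it merely shrinks, which parallels the point made below that a signal buried in noise still exists in the channel.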
Please note that in preparation for this series of essays, an earlier essay dealing exclusively with the SIGNAL-TO-NOISE RATIO has been available in this database for several months. That essay can now be appended to this series' essays as Part 4A.
As stipulated within information theory by Shannon, a message (information content) proceeds along some channel from the source to the receiver.
In line with our interests, information is in-put via some kind of "channel" to the receiver, who then out-puts it in terms of information encoded into concept-nomenclature for further information transfer.
But the in-put itself is an information transfer from "a source" wherever or whatever it might consist of.
We are thus dealing with TWO information transfers:
(a) the in-put transfer, and
(b) the out-put transfer.
Between (a) and (b), however, is "a channel," and after (b) is concluded another "channel" is necessary to further accomplish an information transfer.
So we can think in terms of the in-put
channel and the out-put channel, the in-put channel having to
do with reception of the information, the out-put one having to
do with what we call "communication."
In the human sense of all of this, the
out-put transfer (the "communicating") must first be
encoded into concepto-nomenclature that can be transferred to
others simply because their mental information processing equipment
is already encoded to receive and duplicate it.
All of this seems clear enough, doesn't it?
However, there is one serious glitch.
You see, the in-put transfer ALSO has to be processed INTO the
same mental information processing equipment in order that it
CAN be "received."
If that mental information processing
equipment (which now has to do DOUBLE duty regarding in-put AND
out-put) is not pre-formatted with some exactness regarding both
quantity and quality of the in-put, then the "channel capacity"
will be LESS than it needs to transfer the full information load
into the receiver system.
If this is the case, then the out-put transfer will be only a partial one, or perhaps hardly anything at all. If the in-put and out-put channels cannot MATCH any of the signal, then the signal will simply be blanked out.
In basic information theory, anything
that hampers, distorts, confuses, obliterates the signal is referred
to as "noise."
In this sense, if the noise "in" the channel is less than the signal, then a code exists (or can be established) for which the frequency of errors (noise) may be reduced as low as desired.
If the "noise" in the channel is greater than the signal, then the signal may not be identified; it can still exist in the channel, although so embedded in the noise that it cannot register, be picked up, or identified.
In the sense we are interested, the human
sense, it turns out that human mental information processes ending
up in "perception" can produce not only signal-laden
but noise-laden conceptualizations and mental image pictures with
hardly any way to discriminate which is which.
In answer to the question of where this noise comes from, the daring among us will assume that it originates in our own heads - and that is usually the case.
But a deeper inspection of noise sources reveals that what's in our heads and which contributes to the noise may not be innately present to begin with.
A better part of the noise sources in our mental information processes is ACQUIRED - usually by the enculturization processes that make us fit into our given consensus realities.
This understanding is rather broadly accepted in some echelons of human inquiry, especially if the consensus reality social processes drift into mind-programming rather than overall efficient education.
But there is another far more powerful, but far LESS obvious, noise source, and it is one we all adapt to in order to learn to communicate.
As Shannon pointed out in his information theory (and much to the shock of many at the time), one is "constrained" to utilize language - and with language comes the concepto-nomenclature that becomes lodged, by necessity, into our memory library.
I'll paraphrase how Shannon put it.
Regarding English, some fifty per cent
of the concept-nomenclature we lean upon is required by the structure
[and familiar usage] of the language. The other 50 per cent is
open to free choice of concepts and nomenclature.
Shannon's implication was that if the language-determined part was inhabited with noise-making redundancies, then any adaptation to the language would induce these into mental information processes of ALL those who utilized it.
So, you see, we are not at each individual level "guilty" of faulty information processing - at least 50 per cent of the time.
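Shannon's roughly-fifty-per-cent figure for the redundancy of English can be approached, very crudely, from letter statistics alone. The sketch below (an assumption-laden illustration of mine, not Shannon's actual method) measures "first-order" redundancy from single-letter frequencies; Shannon's full figure also counts word and grammar structure, so this estimate understates the true redundancy:

```python
import math
from collections import Counter

def first_order_redundancy(text: str) -> float:
    """1 - H(letter frequencies) / log2(26): how far the single-letter
    statistics of `text` fall short of maximally 'free' use of the alphabet."""
    letters = [c for c in text.lower() if "a" <= c <= "z"]
    counts = Counter(letters)
    total = len(letters)
    h = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return 1 - h / math.log2(26)

sample = ("the structure and familiar usage of the language require "
          "a share of every message whether the sender chooses it or not")
print(f"first-order redundancy of sample: {first_order_redundancy(sample):.0%}")
```

Even this crude measure comes out well above zero for ordinary English, because the language's structure, not the writer's free choice, dictates how often each letter appears.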
But whatever their source, even a 50 per cent presence of noise-making viruses can easily decrease performance or prevent it from ever activating.
As it turns out, although noise-making
redundancies can be identified in every area of human endeavor,
some are more prone to a larger percentage than others, especially
those that have become adapted to ambiguity. Dare I mention politics
and over-bloated administrations? Or the present conditions of
the "fine" arts? Or the parameters of "love,"
"hate," "sex?" Of course, I'll not mention
the realms of "psychic phenomena" - since everyone knows
what they are.
In any event, it might be said that where
over-simplification and ambiguity prevail, so too do noise-making
redundancies - all of which bury the signal within the noise,
no matter how fashionable is the noise.
It's worth mentioning, generally speaking anyway, that an area of human endeavor thickly populated with noise-making redundancies tends to be "volcanic" in nature. Such areas can exist peacefully within their own parameters, stabilized by their own consensus realities. But if intruded upon, or if THEY intrude upon others, things begin to heat up.
The topics of information and information
transfers will be picked up again in additional essays.
It is now desirable to devote Part 5 to a correlation of what has been discussed in Parts 1 - 4.
In Part 6, we'll discuss not only the noise-making redundancies embedded and perpetuated within ambiguities, but their utterly destructive viral effect on clean, clear "signals." Ambiguous concepts induce structure-lessness, hence they wreck any signal-awareness of STRUCTURE, and without knowledge of the structure of anything very little else can ever be known about it. As we shall see in subsequent essays, STRUCTURE is the IN + FORM, or the format, of something - and as such is what needs to be worked with or within, not against.
In any event, any real attempt to activate any of the superpowers must encompass the reality that signal-to-noise ratios are intimately involved. Thus, the presence of disinformation or misinformation in any system acts much as a viral infection does.
(End of Part 4).