Some ideas to elaborate on

  • how a child evolves the skill of using language: at first he just accompanies gestures with inarticulate shouts, then progressively uses articulated words, but still as an almost-automatic companion to gestures, then gradually learns how the words he articulates can produce effects on the environment on their own, etc
  • memories do not necessarily have to be associated with a sensory modality: e.g. the memory of a phone number, or of a theorem, is associated with neither vision, nor sound, nor touch, etc.
  • cognition as the activity of storing and manipulating mental entities; these mental entities are all interconnected and thus operating on one entity also involves operating on many other entities, much in the way one would operate on a physical entity which is physically connected to other physical entities. In this metaphor, some of the mental entities are accurate representations of physical entities, built by effectively integrating immediate sensations, while other mental entities may be "purely mental"; in this way the line between a physical action and a cognitive process is blurred, and, at the limit, eliminated altogether.
  • coordination defines entities, and various kinds of coordination define various types of entities. e.g. an actuator is a physical entity for which the mechanical parts act in coordination in order to achieve a goal, i.e. each movement of each part contributes to achieving the actuator's final objective
  • attention as a coordinator of cognitive sub-processes; similarity with the role of the central nervous system: whereas the CNS coordinates mechanical components (the body parts), attention coordinates processes (the mental processes)
  • distributed coordination mechanisms as an alternative to centralized coordination mechanisms: for example, fish schools, or bird flocks, exhibit spatio-temporal coordination by virtue of a distributed system represented by a set of rules that is followed by each element of the swarm. In other words, although coordination in a multi-element entity may be achieved via a central coordination mechanism that gives different instructions to each element, it may also be achieved by having one single set of rules that is followed by every element (a minimal code sketch of this idea is given after this list).
    • note that the requirements for the elements' control structures in the two cases above are very different: whereas in the first case differentiation is not only allowed but also beneficial, in the second case the elements should have a more-or-less common control structure in order to be able to execute a set of commands that is common for all of them. This observation also leads to a third model for the elements which is a generalization of the first two, where each element includes both a specialized control structure (to be used for receiving specific control messages) and a common control structure (used to receive non-differentiated control messages).
    • the analysis of the centralized coordination mechanisms can be made from an evolutionist perspective: while a body of [nearly] identical elements can be coordinated by a uniformly distributed mechanism (e.g. a set of rules that governs a homogeneous environment in which the elements operate), as specialization occurs among the body elements the control signals that they will receive will also have to become specialized; this specialization of control signals, together with the need for coordination of the body elements, may constitute an evolutionist pressure that drives the appearance of a centralized control structure as we know it in the evolved creatures (note that this approach can be used to analyze the way societies of living beings are organized, depending on the level of specialization of their members).
  • an evolutionist approach to intelligence: what are its parts (or components, or "organs"), how might these parts have evolved as a response to evolutionist pressures (and how attention might have evolved as the coordinator of these organs), what kind of evolutionist pressures was intelligence subjected to - both as a whole and in its component parts, how the evolution of intelligence (and its components) might have pressured the evolution of its material substrate, etc
  • the importance of specialization, and the implication that there may be several distinct mechanisms responsible for the various abilities that an agent possesses; however, also note that evolution is incremental, such that many sub-modules may well be reused
  • an intriguing aspect when investigating language from an evolutionist perspective: whereas most of the features/abilities that a living being possesses can be analyzed as an evolutionary advantage that an individual gains in its relations with a given environment, for language to be evolutionarily useful it apparently requires that a rather large number of individuals all gain the ability of language simultaneously. However, there are (at least) two immediate objections that can be brought to the above remark: 1) a harmless mutation can well spread in a population even if it's not immediately useful, and only become an evolutionary advantage after a certain period of time when the environment (as a whole) "is ready to make use of it", and 2) maybe one should also investigate the emergence of language as a form of communication with a given environment instead of with other individuals, or as a form of making better use of the information that an individual is learning, or even as a form of "communicating with oneself"
    • note: the remark that a harmless mutation may propagate throughout a population without any immediate benefits, while providing a delayed evolutionary advantage, should be kept in mind when analyzing other traits apart from language
  • a wonderful and not immediately obvious example of how apparently non-conflicting features do actually conflict, and thus one of them can lose ground in face of the other in the course of evolution (in this example, trichromacy vs dichromacy):
    In the case of primate color vision, trichromacy based on the "new" M and L pigments (along with the S pigment) presumably conferred a selective advantage over dichromats in some environments. The colors of ripe fruit, for example, frequently contrast with the surrounding foliage, but dichromats are less able to see such contrast because they have low sensitivity to color differences in the red, yellow and green regions of the visual spectrum. An improved ability to identify edible fruit would likely aid the survival of individuals harboring the mutations that confer trichromacy and lead to the spread of those mutant genes in the population.
    Scientific American - Color Vision: How Our Eyes Reflect Primate Evolution
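
A minimal code sketch (Python) of the distributed-coordination idea mentioned in the bullet on swarms and flocks above: every element follows the same two local rules (cohesion and alignment), and coordination emerges without any central controller. The rule set and the weights are invented for illustration, not a claim about real flocks.

    import random

    # Every agent runs the *same* rule set; no central coordinator exists.
    class Agent:
        def __init__(self):
            self.pos = [random.uniform(0, 10), random.uniform(0, 10)]
            self.vel = [random.uniform(-1, 1), random.uniform(-1, 1)]

    def step(agents, cohesion=0.05, alignment=0.1):
        avg_pos = [sum(a.pos[i] for a in agents) / len(agents) for i in (0, 1)]
        avg_vel = [sum(a.vel[i] for a in agents) / len(agents) for i in (0, 1)]
        for a in agents:
            for i in (0, 1):
                a.vel[i] += cohesion * (avg_pos[i] - a.pos[i])    # steer toward the group
                a.vel[i] += alignment * (avg_vel[i] - a.vel[i])   # match the group's heading
                a.pos[i] += a.vel[i]

    flock = [Agent() for _ in range(20)]
    for _ in range(100):
        step(flock)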

The evolutionist approach to Cognitive Architectures

It has been said that a problem that is well formulated (in a certain language) also contains its solution (expressed in that same language). In this respect, probably the most important obstacle that AI research has been confronted with is the lack of a clearly defined statement of the problem it tries to solve: specifically, although there have been many projects dealing with the field of AI in the past decades, most, if not all, of them have never clearly formulated the problem they are trying to address, and thus it is only natural that they have not been able to devise a strategy for solving said problem.

Enter evolution.

An evolutionist approach to AI not only has the capability to clearly define an objective for the AI research problem, but it also naturally suggests the expansion of the AI research effort towards a more general goal, namely the development of a generic cognitive architecture. In an evolutionist approach, the problems that arise when trying to investigate the huge variety of cognitive architectures that can be found in the natural world can all be framed inside a coherent analytical strategy which studies the incremental adjustments that a cognitive architecture incorporates as it faces the myriad of adaptation problems it is confronted with.

Based on this kind of "evolutionist framing" approach, a crucial breakthrough can be achieved in the investigation methodology of the cognitive mechanisms, by following a clear research guideline which essentially consists of analyzing the evolutionist pressures that have sequentially shaped the various cognitive modules' architectures that can be found in the natural world, from simple insects - and below, all the way up to the modern human - and above. By systematically analyzing one such evolutionist pressure at a time, an evolving model of cognition might be developed by incremental additions of new modules that each answers one specific evolutionist pressure, thus effectively retracing the footsteps of evolution.

Under the hood of awareness

Behind what is subjectively perceived as "the unity of the self" (or "the unity of awareness") there lies a "bubbling surface" of parallel processes which, although they operate autonomously, also interact with one another in numerous ways.

  • for example, imagine driving a car and in the meantime having a conversation with a passenger: there will be numerous autonomous, but inter-connected and coordinated, processes involved in the proper handling of this compound task, each of them dealing with an autonomous activity, e.g. the subconscious steering of the car such that it stays on course, the subconscious articulation of sentences which are grammatically correct, the subconscious parsing of the passenger's replies, etc.


Mind, mental state, and the cognitive processes

In the representation above, the cylinder-shaped container represents one's mind, the "bubbling surface" represents the entire state of mind at a certain moment in time, and each individual bubble represents one individual process that occurs in one's mind (each such process is not only correlated with the rest of the processes, but it also may be connected to various body sensors and actuators).


Awareness and non-awareness
Let us now posit that any sensation that we can ever be aware of is directly related to the activity of a specific area of one's mind (this thesis seems to be supported by experimental results); in this case, we can identify areas of the mental state whose activity would correspond to the various kinds of sensations that we are capable of feeling:


Sensation mapping on the mental state

In the sensation map representation above, the areas that correspond to possible sensations (e.g. the sensation of "seeing a mental image", or "hearing a mental sound", or "feeling a touch", etc) have been deliberately limited such that they do not cover the entire area of one's mental state, thus revealing the possibility that some mental processes occur outside the reach of awareness. In other words, we posit that only a limited area of one's mental state is within the realm of awareness (i.e. the processes that occur in that area have a direct correspondent in sensations), while there may well exist many more "hidden processes" in one's mind whose existence can only be inferred by observing the "sampling points" that are accessible through our sensations.
  • note: if we consider a "mental state" as depicted in the above diagrams to correspond to the overall state of a complete (mammal) brain, then the hypothesis of non-sensible processes is experimentally confirmed by the fact that there is no specific (i.e. "direct") sensory representation of the activity that occurs on the surface of e.g. the cerebellum.

Process interconnectivity
The "bubble processes" in the diagrams above are autonomous but not independent; specifically, various levels of interconnectivity exist between separate processes, and indeed each single process may be composed of a multitude of autonomous, locally interconnected sub-processes. In fact, based on what is known about the structure of the brain, we can posit an elaborate, and biologically consistent, pattern of connectivity between mental processes based on an underlying 'connectivity grid of the mind' which connects any two different regions, large or small, neighboring or not, according to a certain 'connectivity map':

Multi-layer inter-process connectivity grid

The connectivity grid can be regarded as a many-layer construction, with each layer being dedicated to a certain connectivity range. Furthermore, the implication of multiple physically identifiable layers may (but need not) be disposed of, allowing instead for a "continuum of layers" image in the sense that the distinction between layers is not physical in nature, but rather it is based on the connectivity property of their "building bricks" (i.e. we'd call a set of elements that are interconnected within a certain range of distances a "layer"). In other words, we can draw a connectivity map for the short-range connections (the short-range "layer"), another map for longer-range connections (the longer-range "layer"), a.s.o. until we'd reach an 'inter-regional connectivity grid' that would represent the connections between macro-regions (such as e.g. the sensation-specific areas), much in the way various function-specific areas of the brain are interconnected at the macro-level.
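
One possible way to make the "continuum of layers" idea above concrete in code (Python): treat the grid as a plain set of connections between regions, and call a "layer" simply the subset of connections whose length falls within a given range. The region names, coordinates and ranges below are placeholder assumptions, not anatomical claims.

    import math
    from collections import defaultdict

    # Placeholder regions with 2-D coordinates; connections are just region pairs.
    regions = {"V1": (0, 0), "V2": (1, 0), "A1": (5, 2), "M1": (9, 7), "PFC": (10, 9)}
    connections = [("V1", "V2"), ("V1", "A1"), ("A1", "PFC"), ("M1", "PFC"), ("V2", "M1")]

    def length(a, b):
        (x1, y1), (x2, y2) = regions[a], regions[b]
        return math.hypot(x2 - x1, y2 - y1)

    def layers(conns, ranges):
        """A 'layer' is the set of connections whose length falls in a given range."""
        grouped = defaultdict(list)
        for a, b in conns:
            d = length(a, b)
            for name, (lo, hi) in ranges.items():
                if lo <= d < hi:
                    grouped[name].append((a, b))
        return dict(grouped)

    print(layers(connections, {"short-range": (0, 3), "mid-range": (3, 8), "long-range": (8, 99)}))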


Levels of organization
The presence of the multi-layer connectivity grid reveals the "bubbling surface" model as a recursive architecture at various levels of detail (i.e. at different "zoom levels"). In this architectural scenario, a mental process actually consists of several "sub-processes", with each sub-process consisting of other "sub-sub-processes", a.s.o. until we'd reach a very low level of 'atomic processes' that are effectively supported in hardware by 'architectural atoms'. In other words, each individual autonomous process is composed of a number of parallel autonomous sub-processes, each such parallel sub-process may in turn be composed of other parallel sub-sub-processes, etc, resulting in some sort of a hierarchy of processes that all contribute to the overall activity of their common 'super-process'.
  • for example, the process of steering the wheel of a car during driving is in fact composed of a myriad of networked sub-processes that control the activity of all the muscles involved in steering, etc, with the extra remark that these individual sub-processes (or even their sub-sub-processes components) may, or may not, be within the realm of attention at a given time.


Recursive "bubbling surface" model

At first sight, one might be tempted to identify a hierarchical architecture in the diagram above, but this is not the case: what the "bubbling surface" model provides is in fact a distributed processing architecture of interconnected architectural atoms (these would roughly correspond to the cortical columns in the biological cortex) that allows the generation of hierarchical patterns of processes. Unlike a hierarchical architecture, the hierarchical organization of processes is not static, as it involves the dynamic creation, merger, migration, destruction, etc, of processes, all supported by the common ground of the underlying architecture's distributed structures.
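
A small sketch (Python) of this reading of the recursive "bubbling surface" model: processes are nothing more than dynamic groupings over a flat pool of "architectural atoms", and the hierarchy is a pattern that can be re-formed at run time rather than a fixed tree. The class names and the driving/talking decomposition are invented for illustration.

    # The hierarchy of processes is a *dynamic grouping* over a flat pool of atoms.
    class Atom:
        """Architectural atom: the lowest-level unit that actually does work."""
        def __init__(self, ident):
            self.ident = ident
        def tick(self):
            return f"atom {self.ident} active"

    class Process:
        """A process is just a (possibly nested) grouping of atoms and sub-processes."""
        def __init__(self, name, parts):
            self.name, self.parts = name, parts
        def tick(self):
            return [p.tick() for p in self.parts]

    atoms = [Atom(i) for i in range(6)]
    steering = Process("steering", [Process("grip", atoms[0:2]), Process("arm", atoms[2:4])])
    talking = Process("talking", [Process("articulation", atoms[4:6])])
    compound = Process("drive and talk", [steering, talking])   # groupings can be re-formed at any time
    print(compound.tick())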


Competing super-processes
For any given sub-process there may be occasional competition between several higher-level super-processes that all try to "incorporate" said sub-process as one of their own components. For example, if we try to reach a glass of water with one of our hands and in the meantime one of our cheeks starts feeling itchy, our hand may be automatically (i.e. subconsciously) diverted from reaching for the glass of water and directed towards our cheek (the fact that this process is often subconscious is illustrated by the fact that it will usually take several fractions of a second after the hand changes direction for us to become aware of the new direction of our hand; in fact, we will usually become aware that the hand has been "hijacked" for scratching our cheek only when we effectively feel our cheek being scratched, i.e. this action was controlled by a mechanism outside the realm of our awareness). What happens in the above scenario is a competition between two super-processes, both requiring the control of a single physical resource by means of a dedicated sub-process: eventually, in the case of normal (undamaged) brains, the usage of the unique resource will be somehow arbitrated (unconsciously and/or consciously, depending on the situation) such that only one of the competing super-processes will effectively control the resource by "coordinating" a dedicated sub-process that ultimately controls said resource (a toy sketch of such an arbitration is given after the note below).
  • note: if the mechanisms that arbitrate the competition between super-processes for the "coordination" of a sub-process do not function properly, this condition will lead to various localizations, and levels, of incoordination. Some evident cases of severe incoordination are those involving the motor behaviors as they can lead to erratic, and even self-destructive, bodily behaviors (e.g. a specific genetic mutation has been shown to disable the coordination of limbs and trunk in mice), but the lack of coordination can manifest itself even in strictly mental activities, e.g. a chronic inability to focus (as in the attention-deficit disorder), or the inability to follow a logical argument, etc.
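
A toy sketch (Python) of the arbitration described above: two super-processes bid for the same sub-process (the hand), and a simple priority rule decides which one gets to "coordinate" it. The urgency values and names are invented; the point is only that the shared resource ends up controlled by exactly one super-process.

    # Two super-processes compete for one actuator; a simple arbiter grants it to one of them.
    class SuperProcess:
        def __init__(self, name, urgency):
            self.name, self.urgency = name, urgency

    def arbitrate(requests):
        """Grant the shared sub-process (e.g. the hand) to the most urgent requester."""
        return max(requests, key=lambda r: r.urgency)

    reach_for_glass = SuperProcess("reach for the glass of water", urgency=0.4)
    scratch_cheek = SuperProcess("scratch the itchy cheek", urgency=0.7)

    winner = arbitrate([reach_for_glass, scratch_cheek])
    print("the hand sub-process is now coordinated by:", winner.name)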

Specialization

Features as elements of association

Associations are usually not made on raw data; instead, certain features of the raw data are identified, and the mechanism of association then recalls some other data that has one or more similar features. In other words, the notion of 'similarity' applies to features of the data sets, i.e. the "data type" used when evaluating similarities is 'feature' and not 'raw data'.
What the nature of the raw data involved in associations is, what the inbuilt feature extraction mechanisms are, and whether or not feature extraction mechanisms can be learned, are all questions essential to understanding the mechanism of association.
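
A toy illustration (Python) of the claim that associations are keyed on features rather than on raw data: each memory is stored together with a set of extracted features, and recall returns the stored item that shares the most features with the trigger. The feature labels and the stand-in extractor are ad-hoc assumptions, not a proposal for the real feature extraction mechanisms.

    # Associative recall keyed on *features*, not raw data.
    memory = [
        {"data": "vision-test image from the army", "features": {"bubbles", "colorful", "random-layout"}},
        {"data": "a tune heard on the radio",        "features": {"psychedelic", "slow", "echoing"}},
    ]

    def extract_features(raw_description):
        # Stand-in for the (unknown) in-built feature extractors.
        return set(raw_description.split())

    def associate(trigger_features):
        """Recall the stored item sharing the most features with the trigger."""
        return max(memory, key=lambda m: len(m["features"] & trigger_features))

    trigger = extract_features("colorful bubbles random-layout")
    print(associate(trigger)["data"])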


Static-data associations
A very common type of association is the one made on static data, e.g. still images or invariant sounds. The defining characteristic of static-data associations is that the element of time is irrelevant in performing the associations.

  • example of static-data associations on images (a real-life example that happened to me)
    i was listening to a radio show that played a psychedelic tune, and at the end of the tune the d.j. made a comment about that tune allegedly suggesting a colorful image with randomly placed and randomly colored bubbles. I was paying attention to what the d.j. was saying, and as he was describing the allegedly suggested image, a mental image was incrementally "maturing" in my mind following his description. I then closed my eyes in order to better visualize this image, and, after briefly having an inverse-video representation of my mental image, i instantly had a recollection of another similar image consisting of a letter written out of colored bubbles among other colored bubbles. Although the image recollection was instantaneous, it came without any context information, i.e. i did not automatically remember where and when i had seen that image before; instead, i had to consciously focus on finding the context of that image, and only then did i realize that it was part of a vision test that i was given when i enrolled in the army (i will mention here that the time between this recollection and the army vision test was about eight months).

    In this example the elements of association were exclusively image features, namely the bubble pattern in the image and the colorfulness of the bubbles, and the association was performed on purely static data (i.e. the trigger image and the recollected image) with no time element being involved.
  • example of static-data associations on sounds
    Simply hearing a certain sound with a certain timbre may remind us of a tune that uses (e.g. begins with) that sound, or listening to a voice recorded on a tape will enable us to identify the person whose voice is recorded. The features of the time-invariant sound itself are the exclusive elements of association, i.e. the associations are performed on purely static data (i.e. the non-fluctuating sound) with no time element being involved.

Dynamic-data associations
Associations can also be made on features of dynamic data, i.e. the very dynamics of one data sequence can be associated with the similar dynamics of another data sequence. As is the case with static data, the notion of 'similar' applies to features of the dynamic data (both the triggering data and the recollected data), but in this case the features will include the element of time (a toy sketch of such a time-based feature is given after the examples below).
  • example of dynamic-data association on sound
    it is enough to hear a certain tempo of a tune (e.g. the starting tempo) to be able to say that the tune resembles another tune.

  • a more complex example (involves trans-modal association, details below)
    let's imagine we watch an old black-and-white movie in stereotypical fast motion, and then, later that day, we're searching for a tune on an audio tape by pressing the 'cue' button on the tape recorder: when we hear the fast-forwarded sound on the tape we will (probably) immediately recollect the scene of us watching the fast-motion movie. In this example, the dynamics-related common feature is 'fast' (or maybe even 'fast forward') and it is this feature that is being used in the association, i.e. the association is based on a feature extracted from the dynamics of the data.
    • note: in this example, the end-to-end association process also involves the concept of "fast" (or possibly "fast forward").
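
The time-based feature mentioned above can be sketched the same way (Python): a crude "rate" feature is extracted from the dynamics of a sequence of events, and it is this shared feature of the fast-motion film and the fast-forwarded tape that would drive the association. The threshold is an arbitrary assumption.

    # A feature extracted from the *dynamics* of a sequence: events per second.
    def rate_feature(timestamps, fast_threshold=5.0):
        """Label a sequence 'fast' or 'slow' from its average event rate (threshold is arbitrary)."""
        duration = timestamps[-1] - timestamps[0]
        rate = (len(timestamps) - 1) / duration if duration > 0 else float("inf")
        return "fast" if rate >= fast_threshold else "slow"

    movie_frames = [0.00, 0.05, 0.10, 0.15, 0.20]   # fast-motion film
    tape_cueing = [0.00, 0.08, 0.16, 0.24, 0.32]    # fast-forwarded audio tape
    normal_walking = [0.0, 1.0, 2.0, 3.0]

    print(rate_feature(movie_frames), rate_feature(tape_cueing), rate_feature(normal_walking))
    # The two 'fast' sequences share the dynamic feature, so either one can recall the other.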

Trans-modal associations
Another important remark that can be drawn from the example above is that the type of data involved in triggering an association can be different from the type of data that is returned by the mechanism of association: in the example above, the triggering data type was auditory, while the recollected data was an entire scene of us watching the movie (this scene would consist of a fuzzy mental image, possibly accompanied by some sensations related to how comfortable we felt in the chair where we were sitting while watching the movie, etc). While the element that triggered the association was a feature of an auditory signal, the recollected data type was much more complex (it was a scene) and the feature of the recollected data was purely visual.

In fact, the feature extraction mechanism can go much further than was illustrated by the examples above: specifically, just about any sensation that we have, or even mental states that we are not aware of (e.g. the "presence" of a concept in our mind), can be elements of associations.
  • example 1:
    let's imagine that we have a rolled-back video tape and we want to position it to a certain scene (e.g. we want to show that scene to a friend): what we will do is take the tape, approach the VCR, insert the video tape in the VCR, and finally cue the tape in search of the scene we're looking for. Now let's imagine that, later that day, a friend calls and asks us for a certain audio tune, and let's assume we do have that tune recorded somewhere on an audio tape. We'll answer our friend that we have it and that she can come and take it. What we shall do next is go to our tape rack, pick up the tape, insert it in the tape recorder, and then cue it in order to find the desired tune. Let us now consider two scenarios:
    • if we were paying enough attention to the intentionality of finding a specific position on a tape in both situations, then it is very likely that the moment we become aware that we are about to search for the position of a tune on the audio tape, we will have a flash-back with a recollection of us searching for the video scene on the video tape earlier that day. In this case, the very intention to find a position on a tape will be the element of association.
    • if, on the other hand, we were not paying enough attention to the intentionality of searching for a position on a tape either when searching the video tape, or when searching the audio tape, or both, then we might still get a flash-back the very moment we press the cue button on the audio tape recorder and start witnessing the process of a machine cueing a tape. In this case, the very scene of a tape machine cueing to a position on the tape will be the element of association.
  • example 2:
    Scene 1: I have to take the subway from station 'Lorem Ipsum'. As i climb down the stairs, i see a lit panel with the name of the station. Unfortunately, i do not remember if i actually read the writing on the panel or not.
    Scene 2: I am sitting at my desk writing a paper. On my left there is a bottle of wine wrapped in paper. At the left of the bottle there's a book that i use as reference for my work. While writing, i need to check for something in the book, so i turn my head, read what i have to read (didn't have to turn pages), and then i re-direct my eyes back to the paper that i'm writing (which is in front of me). Instantly, i have a recollection of the lit panel with the 'Lorem Ipsum' written on it, but i'm not yet aware of the surroundings of my mental image; it will take a conscious action to mentally "look around" the panel and see the entrance of the 'Lorem Ipsum' subways station.

    What actually happened in this case was that, while i briefly turned my head towards the book on my desk, my eyes must have briefly scanned the label on the paper in which the bottle of wine was wrapped, but i am absolutely sure that i did not consciously read the label. The label contained the name of the store from where the bottle was bought (it was not me who bought it and i had no idea where it had been bought from), and the address of the store was 'Lorem Ipsum 123', etc. Apart from the name 'Lorem Ipsum' itself, there was no resemblance whatsoever between the shape or "style" of the writing on the gift paper and the one on the subway station panel, and there was also no resemblance in the color or shape of the individual letters themselves, etc; actually, all the optical features were quite different.
    There are two important remarks in this example:
    • first, the association involved a synthetic construct related to language, i.e. the name 'Lorem Ipsum', without involving any sensations (i neither heard the name of the station being "pronounced" in my mind, nor did i consciously see - let alone read - the label, etc). Because of the inability to sense the triggering element of association (no sensation was involved), the association was made completely outside the realm of my awareness. (when this episode happened i started to analyze what caused it, and it actually took me quite some time until i saw, and this time consciously read, the label on the bottle, and only then did i understand what had actually happened)
    • second, while this association was triggered by synthetic data that has nothing in common with an image, i.e. the name 'Lorem Ipsum', the association retrieved a pure image. As mentioned before, i was completely oblivious to any context information when the subway station panel's image popped up in my mind, and i had to mentally "look around the panel" to start filling in the context from memory (i.e. to realize that what i actually remembered was an image seen a few days earlier when entering the subway station). In other words, this example is also an illustration of the fact that the type of data that triggers an association and the data retrieved by the association need not be the same.

Language as a tool

The very expression "use of language" is an invitation to investigate language from the functional perspective of a tool; such a perspective might then allow us to draw a parallel between the evolution/invention of language and the evolution/invention of the various tools that a significant number of living beings can use (to varying extents), and even build.

  • for example, certain crows (New Caledonian crows) can perform various tool manipulations, from picking up and using a stick in order to get food from an otherwise inaccessible location, to actually building tools themselves and even using meta-tools; some primates (e.g. the great apes) are capable of an even larger variety of tool usage, including building tools for some activities that are not (immediately) related to basic instincts, e.g. making a toothpick out of a branch.
From the language-as-a-tool perspective, not only can language be seen as a tool, but it is probably the most powerful tool ever invented. By being able to communicate operating instructions to a potentially unlimited number of other individuals, one individual's acting force can be boundlessly multiplied; also, because language is not a physical tool, there are virtually no "manufacturing costs" or intrinsic "usage costs" associated with it. The very low cost of using language, coupled with its extremely high action potential (as mentioned before, it can be seen as a boundless physical force amplifier), reveals language as a tool capable of shifting the balance of power inside an ecosystem by allowing individuals that are skillful in using language in their own interest to overwhelm physically powerful but less language-skilled individuals. In this way, the advent of language is a game-changing phenomenon on the evolutionary scale, as it may radically change the Darwinian fitness function that governs an entire ecosystem; essentially, language paves the way to the supremacy of intelligence over physical power by means of "hijacking" the physical power of others.


Language as a body extension
The parallel between language and tools also allows us to approach language from the tools-as-body-extensions perspective, which essentially states that the repeated use of a tool can make that tool become cognitively perceived as an extension of the body, with the special note that the "evolution" of tools is shaped by a different kind of evolutionist pressure than the biological body parts. Despite accepting "body parts" whose evolution and structure do not strictly depend on the laws of biochemistry (i.e. the tools, including language), this perspective on language nevertheless anchors it in an evolutionist approach, as it suggests an evolutionist continuum that spans the biological body parts, the tools, and finally language, even though the evolutionist pressures are of a different nature in each of these cases.


The linguistic production mechanism as an actuator
Within the language-as-a-tool framework, the mechanism responsible for producing the linguistic constructs (e.g. a simple sentence or even an entire succession of phrases) can be regarded as a body actuator, i.e. a specialized body part with dedicated capabilities for handling language as a tool.

An eloquent illustration of this perspective is a 4-year-old boy sitting next to his mother and talking to her: when he speaks, there will be many occasions when he uses language as a form of action, e.g. he'd say 'open your purse and give me your mirror'; we'd then see the child totally focused on his mother's hands as she starts digging through her purse in search of the mirror, his mind completely captivated ("trapped") in waiting for the process to finish. What he does when he speaks such a request out loud is that he actually elaborates and controls an action: specifically, he essentially opens the purse with his mother's hands. The linguistic production mechanism can in this case be regarded as an organ with which the child acts with much confidence in the outcome, in much the same way he would act if using his own hands.

The above model can also be observed in the seemingly different situation of a usual conversation between two people: for example, the process of saying something to someone (e.g. 'hey, you really need to check this out, [...]') can represent an intentional act of giving information to the interlocutor, and the act of giving something (the information) can be regarded as being performed by means of exercising the linguistic production mechanism as an actuator (any action that is being performed implies a mechanism that performs it - the actuator).
  • note 1: the act of intentionally conveying information may or may not be accompanied by a conscious representation of the reasons behind that act; having a conscious representation of the reasons for which one wants to say something to someone is not a requirement for being able to actually say what one wants to say
  • note 2: not all phrases exchanged during a conversation need to be the result of an intentional act of giving [information]; on the contrary, most parts of a usual conversation will "flow naturally", with the original intention of giving information being temporarily suppressed (e.g. in the middle of a conversation one may find oneself saying: 'oh, but sorry, i don't know why i am telling you all this. let's go back to what i wanted to tell you in the beginning')

An evolutionist approach to language

One approach for analyzing the structure and function of language is to investigate the links between language and the mechanisms that use, produce, and analyze it, and then redirect the analysis effort towards said mechanisms themselves. If we can indeed reduce our analysis of the vaguely defined concept of 'language' to the analysis of physical mechanisms, we could then investigate said mechanisms from various perspectives, and in particular from the evolutionist perspective according to which "a functional requirement can guide the natural creation of a corresponding mechanism".


The linguistic structures
Let us first look at the interaction between the linguistic production mechanism and the environment, and try to identify how the linguistic structures manifest themselves as part of such an interaction.


Linguistic structures

The diagram above assumes the existence of an identifiable 'linguistic production mechanism', and said mechanism is depicted in direct contact with a 'communication substrate' by means of which it interacts with the environment. From this perspective, the linguistic production mechanism effectively acts over the communication substrate, i.e. its inner structure necessarily "terminates" with an actuator positioned at the junction with the communication substrate. In this context, the linguistic structures can now be seen as the set of shapes that the communication substrate takes when submitted to the action of a linguistic actuator.
  • for example, in the case of spoken language, the communication substrate is the air around the speaker, and the set of shapes that this substrate takes at successive moments in time is represented by the instantaneous sounds (phonemes) that can be heard. The linguistic structures, be they static (i.e. the set of phonemes used) or dynamic (i.e. the way the phonemes are chained in time), are directly reflected in the shapes that the communication substrate takes, i.e. in the sounds that can be heard in the air.
If we look at the linguistic structures (both static and dynamic) from this perspective, they can now be considered as properties of an object (i.e. of the communication substrate), and they are a direct reflection of a "modeling activity" performed by the linguistic production mechanism. Thus, based on this remark, we can now reduce the analysis of the linguistic structures to the analysis of the linguistic production mechanism.
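
A deliberately trivial sketch (Python) of the definition above: the linguistic actuator is a function that, step by step, puts the communication substrate into one of a finite set of shapes (phonemes); the "linguistic structure" is then just the resulting sequence of substrate states. The phoneme inventory and the shape labels are placeholders.

    # Linguistic structures as the time-sequence of shapes the substrate (air) takes.
    PHONEME_SHAPES = {"h": "glottal burst", "e": "mid vowel", "l": "lateral", "o": "rounded vowel"}

    def linguistic_actuator(word):
        """Drive the substrate through one 'shape' per phoneme."""
        return [PHONEME_SHAPES[p] for p in word]

    substrate_states = linguistic_actuator("hello")
    print(substrate_states)   # the static structure is the shape inventory;
                              # the dynamic structure is their ordering in time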


Language in a sensori-motor loop
Based on the remark that the linguistic production mechanism is effectively acting upon, and reshaping, its environment, we can now identify a sensori-motor loop that involves the use of language and that consists of a transmitter, the environment, and a feedback path from the environment back to the transmitter:


The language sensori-motor loop

The diagram above is very general, and is applicable not only to natural language (in all its forms, e.g. spoken language or sign language), but also to any form of activity that involves an interaction with the environment based on a sequence of activities that exhibits linguistic properties.
  • for example, the diagram above can represent a system in which a crow is performing a complex succession of activities required for accessing food (New Caledonian crows can even build tools and make use of meta-tools in order to accomplish a goal): in this case the linguistic production mechanism will not generate a succession of gestures required to articulate sounds, or type a sentence, etc, but rather the succession of activities (the "words") involved in the goal-oriented activity (the "phrase") of gathering the food. Moreover, a similar linguistic model can constitute the basis for even far more common activities, e.g. the succession of activities that a dog performs when he buries (or recovers) a bone.
An important remark to be made at this point is that the individual activities that are being sequenced into a new activity are (usually) themselves complex combinations of other activities.
  • for example, when a dog performs the activity of hiding a bone, this activity involves (a specific sequencing of) other complex activities, e.g. digging and walking.


    A dog's activity of hiding a bone
  • note: any activity that is executed by an actuator, no matter how complex, ultimately has to consist of a succession of simple actions because of the serial nature of the actuator. Also, because most of the common activities are not performed via one single actuator, several parallel successions of actions are usually required (one for each actuator); a toy sketch of such a serialization is given below.
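
A toy sketch (Python) of that serialization: a goal-level activity (the dog hiding a bone) is expanded through a hierarchy of sub-activities and finally flattened into one serial stream of simple actions per actuator. The decomposition itself is invented for illustration.

    # A goal activity expands into a hierarchy, then flattens into serial streams, one per actuator.
    HIERARCHY = {
        "hide bone": ["walk to spot", "dig hole", "drop bone", "cover hole"],
        "walk to spot": [("legs", "step"), ("legs", "step"), ("legs", "step")],
        "dig hole": [("forepaws", "scrape"), ("forepaws", "scrape")],
        "drop bone": [("jaw", "release")],
        "cover hole": [("snout", "push soil"), ("forepaws", "scrape")],
    }

    def serialize(activity, streams=None):
        """Flatten the activity hierarchy into an ordered action list per actuator."""
        streams = streams if streams is not None else {}
        for step in HIERARCHY.get(activity, []):
            if isinstance(step, tuple):              # primitive action: (actuator, action)
                actuator, action = step
                streams.setdefault(actuator, []).append(action)
            else:                                    # composite sub-activity: recurse
                serialize(step, streams)
        return streams

    print(serialize("hide bone"))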

The natural language loop
Let us now look at the special case of natural language, where the environment contains a receiver cognitive agent that analyzes, and responds to, a natural language-based message sent by the transmitter.


Natural language loop

Two closed loops are superimposed in the diagram above: the generic loop described in the previous paragraph where the recipient of the message (which is also the feedback originator) is "hidden" in the environment (path: transmitter cognitive agent -> environment -> transmitter cognitive agent), and the special-case natural language loop that explicitly requires the presence of a natural language analyzer at the receiving end of the transmission chain (path: transmitter cognitive agent -> environment -> receiver cognitive agent -> environment -> transmitter cognitive agent).
  • note: the details of both feedback paths are not relevant at this time, and thus they are only indicated by means of their originating and terminating ends.
The form in which the natural language loop is illustrated above allows us to approach the analysis of the linguistic mechanisms from an evolutionist perspective: we can now try to identify the physical elements on which natural language depends, and then determine what evolutionist pressures are acting upon them and how these pressures can shape their functionality (and thus their structure).


Top-level structure of the natural language path

In order to attempt analyzing the language mechanism from an evolutionist perspective, we first need to see if the evolutionist paradigm can be applied to it. This raises the question of what, if any, evolution capabilities the mechanism of language possesses: should we determine that evolution capabilities exist, it then follows that various evolutionary pressures (which will be investigated later) can indeed shape the structure of this mechanism.


Top-level evolutionist decomposition of the natural language path
  • note: the above diagram focuses on the linguistic analysis and synthesis structures, and is not meant to exhaustively include all the information and control pathways in the system. Also, because the nature of the interaction between the linguistic synthesis/analysis mechanisms and the rest of the cognitive structures (the 'cognitive controller') is as yet undetermined, said interaction is illustrated by a generic bi-directional information pathway
The image above represents a top-level decomposition of the natural language path into its key components, based on the ways that each component could have evolved: specifically, the green modules are the oldest ones on the evolution scale, the blue modules represent more recent evolutions that relied on the already-evolved green modules, while the red modules are "program modules" that are learned during a system's lifetime and are independent of biological evolution pressures (i.e. they evolved as a result of cultural rather than biological evolutionist pressures). Also, a very important remark to be reiterated at this point is that the actuator and sensor modules need not strictly represent an acoustic interface, but may also be an interface for e.g. a written or sign language; in other words, the mechanism of language can be uncompromisingly investigated from a communication-medium-independent perspective.
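
One way to write down this decomposition (Python), keeping only what the text above states: each module of the natural language path carries a tag for how it presumably came about (ancient evolved, more recently evolved, or learned during a lifetime), and the investigation proceeds in the order given by that tag. The exact module list is a working assumption reconstructed from the description, not a finished inventory.

    # Modules of the natural-language path, tagged by their (hypothesized) origin.
    ANCIENT, RECENT, LEARNED = "ancient evolved", "recent evolved", "learned (cultural)"

    modules = {
        "actuator (speech / sign / writing interface)": ANCIENT,
        "sensor (hearing / vision interface)": ANCIENT,
        "cognitive controller": ANCIENT,
        "linguistic synthesis mechanism": RECENT,
        "linguistic analysis mechanism": RECENT,
        "learned 'program modules' (grammar, vocabulary)": LEARNED,
    }

    def investigation_order(mods):
        """Order modules by their position on the evolutionary time scale (oldest first)."""
        rank = {ANCIENT: 0, RECENT: 1, LEARNED: 2}
        return sorted(mods, key=lambda m: rank[mods[m]])

    for m in investigation_order(modules):
        print(f"{modules[m]:>18}: {m}")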


Conclusions
The evolutionist perspective on language as presented in this note allows us to find a way of decomposing the mechanism of language into categories of modules that (could have) evolved independently because of the different kinds of evolutionist pressures they were submitted to. Based on this decomposition, the modules involved in the mechanism of language can be investigated incrementally based on the position of each module category on the evolution time scale. By determining what evolutionist pressures were applied on each category of modules we might gain new insights into what functionalities (and structures) could have emerged as a result of said pressures. Specifically, the linguistic analysis and synthesis mechanisms (coupled with the linguistic skills) can be analyzed as an evolving apparatus that (must have) started from simpler structure(s) tailored for simpler functionality (e.g. maybe as simple and ancient as the walking sequencing "controllers"), and subsequently passed through a series of adaptations that ultimately allowed the emergence, and development, of natural language.

Autonomy of the linguistic production mechanism

The linguistic production mechanism is the mental mechanism responsible for generating the succession of mental patterns associated with linguistic constructs, while the spoken words, the gestures involved in sign languages, or even the gestures that may accompany a spoken message, are all motor reflections of the mental patterns created by this mechanism.

  • note: the "inner voice" that one can "hear" in one's own mind is also a reflection of the linguistic mental patterns produced by the linguistic production mechanism. In the special situations where we may assume that the modality of hearing is biologically unavailable (e.g. persons with prelingual deafness), but where linguistic abilities still exist (manifested e.g. in the use of a sign language), the modality of being aware of a linguistic construct in one's own mind might be different from "hearing an inner voice".

Autonomy
Okay, so in this mental experiment i'll have to think about something. What should i think about... How about roses. So, roses are a kind of flower that... No. I'll think about something else, something very common, something used very often, so that i can talk endlessly about it. A house! Well, a house is something in which one lives, and it [...]

The above is a quote from the words that i was "hearing" in my mind during an experiment meant to determine to what extent the linguistic production mechanism can "operate autonomously" from other cognitive modalities (e.g. mental images). The italic portions in the quote correspond to periods of time when i had no conscious image representation of what my inner voice was saying, while the non-italic portions correspond to periods when some kind of mental image was present in my consciousness (i.e. i was aware that i was "seeing" a mental image). Also, when an image was starting to emerge in my mind, it was not necessarily a clear image from the beginning, e.g. when my inner voice was saying "rose" the first mental image i had was of a rose petal (which is only partially representative of a rose), and only afterwards did the image of the flower incrementally evolve.

The experiment described above seems to suggest that language production is an autonomous mechanism that only reflects a deeper-level thread of thought into a specific modality of representation with specific properties (i.e. the modality that we call "language", may it be verbal or of a different kind).

Another example that seems to confirm, from a different angle, the autonomy of language production from the "main thread" of thought is the following:
I am supposed to go out with someone in an hour or so, and when i look out the window i see the weather is rainy and hardly appropriate for a walk.
yeah, right, so how am i supposed to go outdoors on a weather like this

The entire phrase above is a quote of what i was thinking when i saw the weather outside, but the entire phrase represents a thought that "populated" my mind before the phrase was actually articulated in my mind. The articulation of the phrase did not contribute in any way to me becoming aware of the weather problem, nor to reflecting this problem as a thought in my mind; instead, the phrase was constructed as a reflection of what i was already thinking (i.e. the thought that i have a problem with going out because of the weather conditions).

And yet another example: i was watching a TV talkshow featuring a guest who was gesticulating quite a lot while he was doing the talking; this allowed me a rather close look into how his gestures were correlated with his words and, furthermore, i was also able to make some assumptions about how the message he was trying to convey was being serialized into his gestures.
One thing i noticed was that the succession of his gestures could obviously have been performed (and chained) at a much greater speed had they not had to be synchronized with his words. But the really interesting observation is that a gesture corresponding to a certain portion of the message he was verbalizing came before the verbalization of that portion of his message, and this was happening even if the verbalization was rather long and involved several successive words. For example, while he was saying the sentence: '[...], and the entire political scene is [...]', by the time he was pronouncing 'the' he already had his hands wide open as a gestural correlate of the expression 'the entire political scene' that was just beginning to be verbalized (if we try to reproduce this scene ourselves, it will become obvious that the wide-open-hands gesture is not a correlate of the adjective 'entire', but rather of the complete expression 'the entire political scene').
In other words, because a gesture is clear evidence that the corresponding concept already exists in one's mind at the time the gesture is made, this example is a rather clear illustration of the linguistic production mechanism's ability to autonomously reflect, via serialization, the concepts that we mentally operate with.


Cognitive feeder and shuffler
The "inner voice" can, and frequently does, act as an important supply of association elements for the cognitive processes, both in the cases when the attention is, and - to a lesser extent - when it is not, focused on the inner voice. Not only does the inner voice reflect the deeper cognitive processes that take place in one's mind, but because of its serial nature it can also "interrupt" and "re-target" those processes by supplying single words, sequences of words, "sub-concepts" that emerge during the construction of a mental phrase, or even features of the inner voice (such as e.g. pitch), as individual elements that will themselves constitute new vectors of association. Moreover, the relative slowness of mental verbalization may cause a thought to "reverberate" in one's mind: although the thought process might have moved away from the concept that triggered a certain mental phrase, an element of that mental phrase (a word, a pitch, etc) may "reassert" the original thought at the time when it gets mentally verbalized.
In this way, the mechanism of linguistic production is acting as an autonomous "reshuffling" engine for the "deeper level" thought processes, and it can even interfere with the process of memorizing new data when said data is mentally verbalized and thus (partially) reorganized into already-known words (note: such an interference has even been suggested as one possible reason for the non-existence of the "photographic memory" ability in normally developed adults, based on the claim that «adults are much more likely than children to try to both verbally and visually encode a picture into memory»; however, the process might not be strictly related to language, but rather to the more general feature extraction automatism that possibly becomes increasingly prevalent towards adulthood, in which case the "linguistic decomposition" of an image is only a special case of the feature extraction automatism).
  • for example,...
    When i started to write the sentence 'for example,...' in the line above, i was trying to find an example of how individual mental words can "interrupt" the thought process and become themselves vectors of association. My inner voice then stopped at the word "example", not really knowing what to come up with next. In that moment my mind was "empty enough" to be able to re-focus on the word "example", and re-focus it did: instead of continuing to think about finding the example i was looking for, i started to think about 'what is that an example of', about examples of 'examples', etc. I then realized that what just happened, i.e. my mental processes being re-targeted by the word "example" that my inner voice had just said (and "got stuck" on), is in itself an example of what i was trying to show.

  • another example: i was reading an article about cells, and reached a new chapter:
    Subcellular components
    All cells, whether prokaryotic or eukaryotic, [...]

    As i finished reading the title, my mind started to analyze it: in so doing, i "got a little stuck" on the word 'component', i needed to find a better term for it (i will not address the reason for this here), and the fuzzy mental imagery associated with reading 'subcellular components', together with the other cognitive processes that took place, came up with the word 'parts' as a replacement and verbalized it in my mind. However, while i was going through the mental processes of analyzing the title, my eyes continued to read the beginning of the first phrase 'All cells, whether prokaryotic...' and my inner voice was verbalizing that phrase. At the moment when the process of analyzing the title generated the word 'parts' in my mind, my reading of the first phrase had reached the word 'cells', and at that moment the word 'parts' came over the word 'cells' and replaced it in what my inner voice was reading; the resulting phrase (that i was mentally verbalizing) became: 'All parts, whether prokaryotic...'.
    As a side-note, it was not at this moment that i realized what happened, but rather a bit later: as i was reading the modified phrase (with the word "cells" replaced with "parts"), i eventually reached a point where the phrase apparently became nonsensical, and it was then that i stopped and re-started reading the phrase all over again trying to understand it. It was only during this second reading of the phrase that i realized that my mind had replaced "cells" with "parts" the first time i read the phrase.

Objectives for a Strong AI cognitive model

The following checklist is neither exhaustive nor mandatory for the AI Project; instead, the list will be incrementally updated so as to reflect the project's (possibly) changing goals and perspective:

  • model the evolution of human intelligence during the stages of life, from infancy to adulthood
  • be capable of exhibiting abnormal behaviors similar to the ones found in extraordinary mental conditions (e.g. mental illnesses, genius abilities, savant syndrome, hyperthymesia, etc) by (gradually) altering its structure and/or parameters
  • be capable of simulating reduced cognitive abilities as found in various inferior species by (gradually) altering its structure and/or parameters
  • be capable of simulating sleep, and related episodic mental conditions (e.g. hallucinations, sleep walking, etc)
  • use biologically consistent (or plausible) structures at various levels of organization, but only insofar as the biological structures prove to be a near-optimal solution for a given functionality within the set of constraints of the emulation hardware. In other words, biological consistency is deemed important from a block-level functional perspective, but biologically-inconsistent models may be used for any identified functionality that needs to be emulated

Concepts as hidden knowledge

Let's imagine we're in the middle of a conversation with a friend, with a topic along the lines of:

- So, how's John been doing lately, any news?
- Well, i heard he just bought a new car, as for his wife, she [...]

What is it that we think of when we hear the word 'car' in the example above? Or, in other words, what was our mental representation for 'car'? Let's try to make this question more explicit:
  • do we mean 'what "mental image" do we see'? If this is the case then the answer is: we see absolutely nothing.
  • do we mean 'what "mental sound" do we hear'? Again, if this is the case then the answer is: none.
  • do we mean 'what characteristics that are all common to the concept of car are we thinking of in that moment'? And yet again, if this is the case then the answer is: none.
The interesting remark here is that although we somehow do operate with the concept of 'a car' when we hear the sentence 'he just bought a new car', the inner representation of 'a car' is apparently not tangent to any modality of sensory perception, i.e. we don't, and can't, actually have any specific sensation that would directly reflect the presence of a concept in our mind. Specifically, attempting to actually have a sensory modality-related mental representation of a concept will, in most cases, be physically and/or logically impossible: for example, we cannot possibly have an accurate visual representation of the concept 'car' for as long as this concept does not imply a specific shape (and color, and background image, etc); in other words, we cannot imagine a car that is at the same time shaped like car-model-x and like car-model-y, as these two different car models have different, irreconcilable shapes.
The reason we can so easily talk about concepts without actually having a definition of the term (e.g. this note on the concept of 'concept') might be that, apart from being able to perform cognitive operations (e.g. memorizing, making associations, etc) with data that we can somehow be aware of, we can also perform similar operations with synthetic data that we are not aware of. One such kind of synthetic data is what we call 'concepts'. In other words, while concepts may indeed represent a specific kind of data that is operated upon by the cognitive processes, this special kind of data is hidden from our direct awareness and no direct sensation of any kind is associated one-to-one with it. While we can infer that we operate with something-we-call-concepts from the patterns in our mental processes, we cannot actually feel the presence of a concept in our minds.

Evolutionary computation and AI engineering

Given a black-box system and a set of inputs, if one can express the system's functional goal(s) based on a measure of its outputs, i.e. Goals(Measure(Outputs(Inputs))), then evolutionary computation can grow the system's structure towards a mechanism capable of reaching the desired goal(s) (exactly to what extent evolution can build such a system, and under what conditions, is still a hot research topic). In other words, evolutionary computation seems to act as a universal behavioral-to-structural specification converter.
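
A minimal sketch (Python) of this "behavioral-to-structural converter" idea: the fitness of a candidate structure is literally the composition Goals(Measure(Outputs(Inputs))), and a plain mutation/selection loop grows structures toward the goal. Here the "structure" is just a vector of weights and the goal is hitting a target output value, purely for illustration.

    import random

    INPUTS = [1.0, 2.0, 3.0]
    TARGET = 12.0                                    # the behavioral goal, stated on outputs only

    def outputs(structure, inputs):                  # the black-box system, parameterized by its structure
        return sum(w * x for w, x in zip(structure, inputs))

    def measure(out):                                # some measurement performed on the outputs
        return out

    def goals(measured):                             # how well the measured behavior meets the goal
        return -abs(measured - TARGET)

    def fitness(structure):
        return goals(measure(outputs(structure, INPUTS)))

    population = [[random.uniform(-1, 1) for _ in INPUTS] for _ in range(20)]
    for _ in range(200):
        population.sort(key=fitness, reverse=True)   # selection...
        parents = population[:5]
        population = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                      for _ in range(20)]            # ...and mutation
    best = max(population, key=fitness)
    print(best, fitness(best))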

What does this mean to AI engineering?

  • first, at a strictly engineering level, using the evolutionary computation principles would allow designing an AI system by mixing structural and behavioral specifications. In this way one would eliminate the need for evolution to find a structure for those building blocks for which a structural specification has been designed, while resorting to evolutionary algorithms only in those cases when finding a structure is elusive.
  • second, pursuing a mixed structural-behavioral modeling essentially means giving evolution no more than a helping hand by hinting at certain directions at various stages in the evolution of a complex system: designing a structural specification for a building block (or even for a top-level system) only lets evolution skip the stage of finding a structure for that particular building block, but it does not help evolution any further in finding (i.e. "designing") a structure for the component sub-blocks. At one end, specifying the structure of every single building block in a system completely eliminates the need for evolution to synthesize any structure at all; at the other end, if no structural hints are given, it is left up to evolution to find the entire inner structure of the system. In other words, mixed structural-behavioral modeling only acts as a human-operated evolution accelerator, while remaining consistent with the evolutionary paradigm (a toy sketch of this idea follows below).
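As a sketch of the mixed structural-behavioral idea, the toy example below (again in Python, with hypothetical names and task) fixes one building block by hand, i.e. structurally, and leaves only the remaining parameters for evolution to find via a behavioral fitness score; this is the sense in which a structural specification merely saves evolution the work of discovering that particular block.

    import random

    INPUTS = list(range(10))
    TARGET = [2 * (x ** 2) + 3 for x in INPUTS]   # behavioral goal

    def designed_block(x):
        # Structurally specified by the engineer: evolution does not touch this part.
        return x ** 2

    def system(evolved_params, x):
        # Full system = the hand-designed block composed with an evolved linear readout.
        a, b = evolved_params
        return a * designed_block(x) + b

    def fitness(evolved_params):
        # Behavioral specification: distance between actual and desired outputs.
        return sum((system(evolved_params, x) - t) ** 2 for x, t in zip(INPUTS, TARGET))

    def evolve(pop_size=40, generations=300, sigma=0.2):
        pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            parents = pop[: pop_size // 4]
            pop = parents + [
                (p[0] + random.gauss(0, sigma), p[1] + random.gauss(0, sigma))
                for p in random.choices(parents, k=pop_size - len(parents))
            ]
        return min(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))

Here evolution only has to discover the readout parameters; had designed_block not been specified, its inner structure too would have had to be grown from the behavioral score alone.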

Common activities and the structure of language

How much of the structure of language can be observed when a dog buries a bone, in the form of the succession of actions he performs in order to accomplish the goal? And how much of it is present when he elaborates (and executes) the succession of actions needed to retrieve the bone from where he buried it? An important remark to be made here is that although the dog performs a succession of actions which, taken individually, he might also perform on other occasions, the entire succession of actions can be seen as a goal-oriented activity which only has a meaning as a whole.

  • note: when we talk about the execution of an action, said action may itself consist of a succession of "sub-actions", such that a task that accomplishes a goal will ultimately consist of a serialization of a hierarchy of lower-level actions.
Our question can also be rephrased as follows: is there any "qualitative" distinction between articulating a sentence and the kinds of activities described above? Could it be that the mechanism of language production is just a very evolved way of serializing concepts, very similar to the way the intention of hiding a bone is serialized into a coherent activity? In this case, the concept-serialization mechanism being 'very evolved' would refer to the complex set of constraints that the process of serializing a concept into language must obey (i.e. an entire hierarchy of syntactic and grammatical rules).
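The "serialization of a hierarchy of lower-level actions" mentioned in the note above can be illustrated with a small Python sketch: a goal-oriented task is represented as a tree of sub-actions, and executing it amounts to flattening that tree into one linear sequence of primitive moves. The particular breakdown of the bone-burying task is, of course, only an illustrative assumption.

    # A goal-oriented task as a tree: (action name, list of sub-actions).
    bury_bone = (
        "bury the bone",
        [
            ("find a spot", [("sniff ground", []), ("choose location", [])]),
            ("dig hole", [("scrape with paws", []), ("check depth", [])]),
            ("place bone", []),
            ("cover hole", [("push soil", []), ("tamp down", [])]),
        ],
    )

    def serialize(action):
        # Depth-first flattening: leaves are primitive actions, internal nodes are goals.
        name, sub_actions = action
        if not sub_actions:
            return [name]
        steps = []
        for sub in sub_actions:
            steps.extend(serialize(sub))
        return steps

    # The flat sequence only "means" burying a bone as a whole, even though each
    # primitive step may also occur, on its own, in other activities.
    print(serialize(bury_bone))
    # ['sniff ground', 'choose location', 'scrape with paws', 'check depth',
    #  'place bone', 'push soil', 'tamp down']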

It is important at this point to stress that not all messaging protocols require the use of language: specifically, chaining a set of individual actions into an activity that has a meaning (e.g. a purpose) as a whole is considered to be a characteristic that differentiates linguistic production from other forms of messaging. Some rudimentary, non-linguistic forms of messaging between individuals by means of gestures (possibly including inarticulate sounds, or even articulate sounds) can be observed in many living creatures; however, in these cases one gesture (which may consist of a succession of movements) is dedicated to sending one specific message, while a succession of several gestures taken together does not send a new, distinct message compared to the individual gestures. In other words, in this case we cannot talk about a true "sign language", because chaining several signs (i.e. gestures) together does not bring any new meaning to the gesture combination as a whole (i.e. neither "sign words" nor "sign sentences" are formed).
  • note: in some situations, a message that is transmitted via one single gesture can be parameterized in very complex ways, e.g. the case of bee dancing: the bee "dance" is a pattern of movement performed by bees inside their beehive upon returning from a food search flight, and it transmits information about the location of the found food by means of several parameters of the movement pattern involved in the "dance" (e.g. the orientation of the "dance" conveys information about the direction in which the food was found, the speed of the "dance" is related to the distance to the food, etc). However, this messaging method does not involve a sequence of signs, nor does it contain any hierarchy among the "dance" parameters, and thus it does not exhibit some of the fundamental properties of language. If we were to look for a similarity between the bee dance and an element of natural language, we might compare it with the "nuances" of an interjection, e.g. the various ways in which "ooooh" can be pronounced, with each of these pronunciations conveying a different message (such as being surprised, disappointed, etc).
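The contrast drawn above, between a single parameterized gesture and a compositional sequence of signs, can be sketched as two tiny messaging schemes (the encodings below are illustrative assumptions, not models of real bees): in the first, the message lives entirely in the parameters of one signal; in the second, the same signs arranged in a different order produce a different message.

    from dataclasses import dataclass

    @dataclass
    class DanceSignal:
        # One gesture, parameterized: no sequencing and no hierarchy of signs.
        orientation_deg: float      # direction toward the food source
        tempo: float                # related (inversely, say) to the distance

        def decode(self):
            return f"food at bearing {self.orientation_deg} deg, distance ~{1.0 / self.tempo:.1f} units"

    def decode_sentence(signs):
        # A sequence of signs: the order of the same signs creates a new meaning.
        return " ".join(signs)

    print(DanceSignal(orientation_deg=40.0, tempo=0.5).decode())
    print(decode_sentence(["dog", "buries", "bone"]))
    print(decode_sentence(["bone", "buries", "dog"]))    # same signs, different message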

Don't read this / manifesto

What we do for ourselves dies with us. What we do for others and the world remains, and is immortal. - Albert Pine


Humanity is under attack. The establishment brings terror, suffering, and devastation wherever they choose to strike, and there seems to be next-to-nothing left to stand in their way. They have the power of the H-bomb, they conquer through invasion and infiltration, they pervert human minds and souls through an all-out informational war waged against decency, against kindness, against awareness. They indoctrinate people with the image of a world that starts with one's self and ends little beyond one's family, a world in which nothing else should ultimately matter. They rally people to vote for them, and they would stop at nothing to prevent anyone from challenging their system. They would preach that every vote matters, but they will not say just how much it matters for each of the parties involved. They will not say how the system is designed such that they will always somehow get "elected" to fill one slot in the matrix or another. And they will not say how much it matters to the ones who vote. They will not say just how minute the chances are for someone's vote to actually swing a balance. And they will not say that what one gambles with when one votes is one's own hopes and ideals. They will not say that. They will say the system is fine, that one has choices, but they will not expose just how minute the chances are for people to actually build their own choices. They will never publicly debate anything other than what they think they can control. Instead, they will make enough noise to cover any and all voices that they deem a threat to their establishment. And the better they get at preventing people from seeing the correlation between one's actions and the actual causes that determine those actions, the more power of manipulation they have. Propaganda is their ultimate brainwashing weapon, and the whole planet is on the brink of being chained in ignorance and hopelessly enslaved.

And soon there will be no place left to hide.

This is not fiction. This is happening today. World War III - the informational war - has long begun, and the aggressors' weapon of choice is a global-scale net of ignorance thrown upon the entire planet. Extreme selfishness then only comes as natural in a blindfolded world, and what better way to control all those legions of people than having any one of them ready to fight any other, at the push of a button. Their button. And all in the name of one's own individual interest. Most of this war's victims don't even know they have fallen, and most of the aggression armies' soldiers don't even remember, or realize, that they are yesterday's victims. Indeed, as far as these ignorant actors know, this war does not even exist. In the end this war cannot be truly won by anyone, as the winners are no more than the last victims in a world where ignorance reigns supreme and the machine completely takes over.

But there are some on the battlefield who are still standing, some who cherish their freedom to paint the world around them in any color they choose, even if only at random. They obstinately refuse to contemplate their freedoms being taken away from them. At the very foundations of their souls, they doubt both their will and their capacity to stop interacting with others in ever new, playful ways. They doubt they should, or could, keep looking the other way. For them the war is real, and they can still tell evil when they see it. They are still trying to put up a fight, most of the time not driven by any consistent "theory of a better world", but simply set against the invasion of ignorance, and the extreme selfishness and lack of compassion that ignorance inevitably brings with it. Their resistance is not necessarily meant to set free the fallen ones, people who may no longer want to be set free (although many of them are only "wounded", not yet "dead"), but rather to fight against an invading system in which they themselves would be doomed to become first the victims, and then the soldiers, of utter ignorance.


The AI Project is here to resist. It is partly visible, but for the most part it is hidden. Its visible side is both a call to join and a suggestion as to what its hidden side might look like. It will try to navigate the rough waters of public exposure as best it can, constantly trying to define both its purposes and its means.

When Strong AI eventually comes into being, emerging from this AI Project or from anywhere else, there will be no human mind able either to understand it (if thus instructed) or to fight against it. There will be no human mind capable of devising a strategy to counter a plan laid out by a strong AI agent. It will all be about who has it and who doesn't, and who has it first does matter.

The quest for building Strong AI is a fight for the future shape of humanity. No more, no less. Whoever gets there first will have the power to change the world in potentially irreversible ways. They will have the power to make children betray their parents, neighbors spy on each other, best friends engage in devastating wars, communities self-destruct. Or they will have the power to help the ones in need. And play. Let us not forget our history: human minds can be conquered, and souls perverted, in an informational war. The Hitlerjugend was born this way, suicide bombers operate this way, wars work this way, the world allowed itself to be brought to the brink of self-destruction during the Cuban missile crisis, the fratricidal sectarian wars prove it is possible, and world-wide biometric policing is just around the corner with very few seeming to care. Should such colossal powers fall into the hands of evil, they will be used in evil ways.

Will weapons-as-we-know-them disappear, will they be rendered useless in an informational war? Will suffering and fear become unnecessary for winning such a war? Maybe, but by no means will they be gone. The supreme pleasure of evil is to be at the controls of others' suffering, and nothing short of total control will ever suffice. Weapons will get ever more powerful, as measured on just about any imaginable metric: you'll have remote-torturing weapons, mass destruction weapons, "surgical operation" weapons, you name it. But merely possessing such weapons will not bring anyone protection: in an informational war, it may well be the enemy that operates your own weapons, without you ever knowing it. It's a spy game taken to the extreme.

But there is hope.

Over the course of history, the ability to win a war has systematically required an ever-increasing amount of economic power. It was economic might that fed the armies, be they armies of soldiers on the battlefields, armies of scientists developing the next generation of weapons, or armies of ordinary people that simply needed to be kept from interfering with the war plans. Only the holders of economic might could plan a war and execute it. The more society evolved, the harder it was for small resistance groups, increasingly disadvantaged in terms of economic power, to stand in the face of an invading army, let alone to challenge the principle of invasion itself.

But we live in an age of change, an age of radical transformations. Knowledge, wisdom, and determination alone can now, for a brief period of time, completely overturn the balance of world power. Never before in the modern age did small, under-financed groups have the means to send their message directly to entire continents. They do now. Not only do they have such means, but, with today's technology, individually customized messages can be sent to every single person in the world-wide audience. A communication vehicle that powerful had never been in anybody's hands before the advent of the internet and the other modern communication tools, and now it is at the fingertips of half the planet. All that remains to be developed is the ultimate tool of persuasion, a tool that will fully utilize the technological might that is so accessible in the modern world. And it will be developed, probably sooner rather than later, but, again, the big menacing question is: who will be the ones to own it?

Persuasion is a matter of intelligence, and we live in an era when super-intelligence can be built. Moreover, building such super-intelligence increasingly seems to be a matter of intelligence bootstrapping and very little else, which makes it equally approachable by both the establishment and the resistance. If indeed intelligence plus a customized communication channel to every person on the planet is all that's needed to turn the world around, then the resistance's problem these days is no longer a lack of economic resources, as very little of those would be required to pursue just about any imaginable goal; rather, it is its lack of organization and its inability to articulate a compelling alternative to be spread all over the planet. Moreover, one or both of the above are sometimes strangely viewed as virtues, or even necessities, rather than obstacles, and what one gets is the current state of affairs, where the resistance is an almost invisible, heterogeneous "movement" with little if any power to define itself, let alone to shake the system it deems so utterly wrong. The establishment is now vulnerable, probably for the first time in history, at its very material foundation - the human minds that devise and perpetuate its doctrine - but what exactly should be put in place, let alone how, is, to this day, anyone's guess.

The human mind is the last frontier, and history proves it can be conquered in an informational war. And strong AI technology is more than capable of winning that war. The real question now is: who will be sitting at the buttons.

Hopefully, the future is unwritten.
