Welcome to the AI Project

Many years have passed since Alan Turing described the now-famous Turing Test: an information system is to be deemed "intelligent" if, when placed in contact with a human jury via a character-based interface, it is indistinguishable from a human operator. This test also defines what is generally referred to as "Strong AI", as opposed to "Weak AI", which deals only with particular aspects of the broad mechanism we know as 'intelligence'.
In this context, the AI Project aims at designing an open-source human-level intelligent agent capable of passing the Turing Test. Moreover, consistently accounting for, and properly emulating, the large qualitative and quantitative variations found in human mental abilities (including various kinds of extraordinary mental conditions, e.g. autism, hallucinations, genius-level abilities, etc), as well as understanding, and possibly emulating, para-intelligent behaviors (e.g. emotional, instinctive, compulsive, obsessive, fanatical, etc), also falls within the scope of the AI Project research effort.

Moral Considerations
While the AI Project is not grounded in any moral system, it does encourage a careful attitude toward its moral implications. Reaching some level of consistency in defining, or at least partially characterizing, one or more moral systems in which a Strong AI agent could operate would by itself constitute a major achievement of the AI Project effort. Should such moral systems emerge, their description will be absorbed as a key component of the project's fundamentals.
A few remarks first:
  • intelligence alone (i.e. without any moral- and/or aesthetic-analysis abilities) can solve an immense number of severe real-life problems that modern society is confronted with (examples: systemic crisis situations; medical problems; positive-science problems with philosophic implications; etc)
  • an AI agent can be designed such that a number of human intelligence parameters (as present at the current evolutionary stage) can be experimented with (examples: high-level motivations can be artificially induced; sensitivity to tiredness can be adjusted; the short-term memory limit of roughly half a dozen items can be raised; virtually unlimited long-term memory, communication bandwidth, and processing speed are feasible; etc)
  • a Strong AI agent would most likely be capable of carrying out rational analysis at speeds many orders of magnitude higher than any human intelligence, and may thus become entirely incomprehensible to humans in terms of the methods it uses to reach its conclusions and elaborate its actions. Furthermore, whereas in human societies the communication speed between individuals is a major limitation on collective reasoning, this limitation will play a much lesser role for interconnected AI agents. As a result, it is possible for a true comprehension barrier to emerge, if allowed, between AI agents and human beings. This barrier may also prove insurmountable, regardless of future advances in human-brain engineering.
  • an AI agent can be intentionally used by human operators, by programming high-level goals, both to increase people's well-being and to generate (what is commonly perceived as) painful situations
  • keeping secret a technology that requires only modest implementation resources (as Strong AI technology most likely does) is not a realistic long-term goal
While acknowledging that the problem of moral systems goes far beyond what can be addressed at the current stage of this project, this section lists some seed moral considerations pertinent to a world where Strong AI agents would constitute part of society:
  • most of today's schools of thought agree that morality essentially deals with the interaction between a human being and other biological entities
  • regulations in moral systems seem to be strongest when addressing human-to-human interaction, and they tend to relax towards animals, plants, and other forms of being
  • moral constraints are essentially meaningless for a single-individual society; as such, a moral system ranking social isolation as "good" solves almost every crucial moral problem that multi-individual societies are confronted with. At this point a distinction needs to be made between human societies and possible AI-agent societies: whereas a human individual cannot survive in complete isolation from other life forms (so that ranking an isolationist moral system as "good" would apparently be equivalent to ranking human extinction as "good"), for AI agents the pursuit of isolation from natural life forms does not seem to imply a necessary condemnation to extinction.
  • if the problem of conflicting moral systems is solvable via logical analysis, it means its solution is implicitly embedded in the Strong AI agent's innermost structure, which in turn would mean that it is within the reach of an independent Strong AI agent with no need to have it pre-programmed as an inbuilt goal.

Key Objectives

As an alternative to thoroughly defining a Strong AI agent, this section attempts to collect some of its characteristics into a list of key objectives to be pursued by the AI Project. A general framework for describing the scope of the AI Project can be derived from the following observations:
  • a Strong AI agent's entire I/O activity is guided by reprogrammable goals. A "clean" AI agent, with no programmed goal, does not perform any I/O activity at all.
  • a Strong AI agent has the potential to answer extremely complex logical questions in a time frame that would qualify those problems' solvability as "unrealistic" when addressed by "pure" human intelligence. Similarly, given a fixed time frame for reaching a solution to a problem, it seems intuitive that a Strong AI agent could outperform human intelligence on any metric used to evaluate its results.
  • a Strong AI agent, as understood herein, is not intended to evaluate moral and/or aesthetic problems; instead, moral and/or aesthetic goals are to be programmable by human operators. However, if the resolution of (some) moral issues is proven to fall within the realm of logical analysis of empirical data, then morality viewed from this perspective will be part of a Strong AI agent's scope.
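The first observation above — that a "clean" agent with no programmed goal performs no I/O at all — can be sketched as a toy architecture. This is a minimal illustration only; the `Goal` and `Agent` names and their structure are hypothetical and not part of any actual AI Project design:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Goal:
    """A reprogrammable goal: a description plus a step function that
    maps a percept to an output action, or to None (no action)."""
    description: str
    step: Callable[[str], Optional[str]]

@dataclass
class Agent:
    """Toy agent whose entire I/O activity is gated by its goal list."""
    goals: List[Goal] = field(default_factory=list)

    def observe(self, percept: str) -> List[str]:
        # With no programmed goals, the loop below never runs:
        # a "clean" agent performs no output activity at all.
        actions = []
        for goal in self.goals:
            action = goal.step(percept)
            if action is not None:
                actions.append(action)
        return actions

clean = Agent()
print(clean.observe("hello"))   # [] -- no goals, hence no output

greeter = Agent(goals=[Goal("greet", lambda p: f"echo: {p}")])
print(greeter.observe("hello")) # ['echo: hello']
```

The point of the sketch is only that output is strictly a function of programmed goals: reprogramming the goal list changes the agent's entire observable behavior.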
In this context, an open list of AI Project key objectives can be now drafted, with the remark that each of these objectives is extremely broad in nature and is only briefly outlined in this section:
  • analyzing from various moral perspectives the potential repercussions of achieving the Strong AI goal. Such an analysis aims at a continuously-adjusting definition of a metric that will be used for making incremental decisions pertinent to the ongoing work on the AI Project.
  • defining an AI agent "startup procedure". This problem has far deeper implications than may appear at first sight. How can a system perform any action at all before entering any informational exchange with its environment (e.g. accepting some initial direct commands from an operator)? This question suggests that the internal architecture of Strong AI agents will have to contain some sort of inbuilt, instinctive (re)actions, with the big challenge being the (apparently) necessary behavioral biasing of the agent via architectural design decisions, which in turn opens the Pandora's box of morality-related issues.
  • defining, or at least trying to characterize, a set of fundamental concepts upon which the entire AI Project effort will be based, and then using these concepts as "development threads". These concepts will probably not lend themselves to a strict hierarchical organization, because such a structure would most likely require natural language as the metalanguage for describing the root element, if any; instead, the fundamental concepts may be described by various types of links between them, including recursive links, as well as links that ground them in physically observable phenomena. Novel description/characterization methods that may emerge will also be considered.
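The last objective — concepts described not by a hierarchy but by typed links, including recursive links and links grounding concepts in observable phenomena — can be illustrated with a small graph structure. All concept names and link types below are invented for illustration and are not part of the project's actual concept set:

```python
from collections import defaultdict

class ConceptGraph:
    """Toy store for 'fundamental concepts' connected by typed links.
    Unlike a tree, it permits recursive (self-referential) links and
    'grounds-in' links tying a concept to an observable phenomenon."""

    def __init__(self):
        # concept -> list of (link_type, target) pairs
        self.links = defaultdict(list)

    def link(self, src, link_type, dst):
        self.links[src].append((link_type, dst))

    def neighbors(self, concept, link_type=None):
        """Targets reachable from `concept`, optionally filtered by link type."""
        return [dst for t, dst in self.links[concept]
                if link_type is None or t == link_type]

g = ConceptGraph()
g.link("memory", "depends-on", "time")
g.link("time", "grounds-in", "clock readings")  # grounding in observation
g.link("self", "refers-to", "self")             # recursive link
print(g.neighbors("self"))                      # ['self']
print(g.neighbors("time", "grounds-in"))        # ['clock readings']
```

The recursive link is what a strict hierarchy cannot express, which is the structural point the objective makes.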

Final Words
Despite the field of Strong AI being extremely fertile ground for futuristic speculation, there are some "reasonable" observations that can be (and are being) made regarding its potential. Probably the single most disruptive social impact a Strong AI system may have is its ability to analyze, from a positive-science perspective, the world as a whole. This would include thorough, consistent models for anything from living beings all the way up to entire societies and cultures; complex notions seemingly irreducible and unavailable to introspection, such as the sense of "color", "self", or just about any other sensation in itself, might be connected (though probably not completely reduced) to physical processes; social mechanisms that today are theorized in a vague, and often ideologically tainted, language might reveal themselves in new lights; and maybe answers to a wide range of fundamental philosophical questions will eventually be made rationally comprehensible. Or maybe not. But in all this 'maybe'-plagued environment, one thing seems intuitively certain: the implementation of a Strong AI agent would revolutionize mankind in a way that no other human achievement ever has.

All content on this website is covered by the copyright policy of the AI Project.