After exploring how to link the conscious and unconscious abstractions with basic feelings, this article postulates a new model for dreams, summarizes the possibilities for computer simulation, and provides a high-level description of the initial cognition process. It is intended as a framework for systematic discussion of each component and process in the hypercognition system, sufficient to define a learning Artificial Intelligence (AI) with emotions and dreams.

HAL AI Hypercognitive Architecture

Introduction

The first topic of this series, Integrating the Hypercognition Model of Self with Freudian and Jungian Models of Unconscious, outlined the Demetriou theory of hypercognition as a model of neocortical development, then integrated it with theories of the unconscious from Freud and Jung. To summarize, Demetriou's 'hypercognition' model provides three processing stages (control, processing domains, and executive function), six processing domains (categorical, numerical, spatial, causal, social, and linguistic), and four development phases (abstractions of 1st, 2nd, 3rd, and 4th order). It also described a feedback loop from the executive function (introspection), through short-term memory, which combines with sensory input and pipes into the control stage, allowing memories and abstractions created by the executive function to be stored.

The first topic then explored how the unconscious integrates into the hypercognition model by 'pushing down' abstractions into deeper levels; consciousness only has access to the upper abstractions. The processing domains also contain a learned abstraction of the self, which is developed through introspection. It explained how the relationship between the semiotic models, semantics, and neurology is only loosely correspondent, and how, due to the complexity of the neocortex in the human brain alone, a direct correspondence is not possible, so an abstracted model is required.

So far, then, I have defined:

  • A model of hypercognition containing unconsciousness
  • Its development
  • A system of abstract symbols linking perception with emotion through a series of links to deeper abstractions, which eventually connect with the glandular system
  • A feedback loop, providing introspection, which combines with sensory input in the control stage
  • An abstracted conception of self in the processing domains, which is defined by the same introspection process as other abstractions, and which interacts with the executive function to define motivation
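To make the discussion concrete, below is a minimal Python sketch of how these components might be arranged as data structures. The class names, fields, and toy stimuli are illustrative assumptions for this article, not terms taken from Demetriou; the sketch only demonstrates the feedback loop, in which the executive function's introspective summary passes through short-term memory and is combined with the next sensory input in the control stage.

from dataclasses import dataclass, field
from typing import Dict, List

DOMAINS = ["categorical", "numerical", "spatial", "causal", "social", "linguistic"]

@dataclass
class Abstraction:
    label: str
    order: int                                      # development phase: 1st- to 4th-order
    links: List[str] = field(default_factory=list)  # links to deeper abstractions

@dataclass
class ProcessingDomain:
    name: str
    abstractions: List[Abstraction] = field(default_factory=list)

@dataclass
class HypercognitionSystem:
    domains: Dict[str, ProcessingDomain] = field(
        default_factory=lambda: {d: ProcessingDomain(d) for d in DOMAINS})
    short_term_memory: List[str] = field(default_factory=list)

    def control_stage(self, stimulus: str) -> List[str]:
        # Sensory input is combined with the introspective feedback held in
        # short-term memory, then piped on for deeper processing.
        return [stimulus] + self.short_term_memory

    def executive_function(self, processed: List[str]) -> str:
        # The executive function acts on the processed inputs and feeds a
        # summary of its own activity back through short-term memory.
        summary = "reflected on %d inputs" % len(processed)
        self.short_term_memory = [summary]          # the feedback loop
        return summary

system = HypercognitionSystem()
for stimulus in ["light", "dark", "mother's face"]:
    combined = system.control_stage(stimulus)
    reflection = system.executive_function(combined)
    print(stimulus, "->", reflection)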

Basic Feelings

In order to discuss an Artificial Intelligence (AI) schema, eight basic emotions are here defined in terms commensurate with commonly accepted ideas in psychology.

Alertness and Drowsiness

The first stage of hypercognition, the control system, sets the extent to which its input (stimulus, introspection, or hypercognitive feedback) invokes processing by deeper connections. If the control system invokes deeper processing, glandular reaction increases, causing the control system itself to work faster or slower.

When hypercognition is turned to consider itself (in the act of introspection), the executive function receives information about its own prior processing speed from the six processing domains. The self is then conscious of its degree of alertness or drowsiness, and this information may itself be stored as part of the memory of self (referred to as the 'synthetic unity of apperception' in Kant's Critique of Pure Reason).

The hypercognition process compares current performance with the memory of previous performance in the synthetic unity of apperception. The executive function can decide, based on the degree of alertness or drowsiness, whether the individual needs to wake up or fall asleep. However, as we all know, it is very difficult to force oneself to wake up more, or fall asleep more, because the deeper connections from the neocortex to the glandular system lie in the unconscious, deeper in abstraction than the executive function can access directly.

Alertness and drowsiness are feelings which differ from the deeper emotions (such as fear, anger, hope, and love), because they are observations that consciousness makes about the performance of hypercognition itself, not about deeper abstractions connected to glandular activity.
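As a minimal sketch, the executive function's judgment of alertness or drowsiness could be modeled as a comparison of the current processing speed against the remembered baseline in the synthetic unity of apperception. The 20% threshold below is an arbitrary illustrative assumption.

# Classify alertness/drowsiness by comparing current hypercognition speed
# against a remembered baseline (the "memory of self").
def assess_alertness(current_speed: float, remembered_speed: float,
                     threshold: float = 0.2) -> str:
    """Return the feeling consciousness would report about its own speed."""
    ratio = current_speed / remembered_speed
    if ratio > 1.0 + threshold:
        return "alert"
    if ratio < 1.0 - threshold:
        return "drowsy"
    return "neutral"

# Example: processing currently runs at 70% of the remembered baseline speed.
print(assess_alertness(current_speed=0.7, remembered_speed=1.0))  # -> drowsy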

Fear and Anger

Of the higher-order feelings, psychologists rank fear and anger as the two most primal emotions. These properties of the 'lizard brain' control the primary 'fight-or-flight' response. More specifically, in response to these emotions, the adrenal medulla produces a hormonal cascade that results in the secretion of catecholamines, especially norepinephrine and epinephrine. These hormonal reactions are found in the oldest and most primitive species, hence the label 'the lizard brain.' The response is intended for hostile situations, in which the individual is continually assessing whether it is better to attack or retreat. If the individual uses hypercognition for introspection, the 'fight' response is interpreted as anger, and the 'flight' response is interpreted as fear. But in fact, because the lizard brain is so low level, some level of fight/flight response is always present, even if very low.

Perhaps it is relevant to observe that these two primal reactions are also prominent in historical texts dating to the earliest days of civilization. However, detailed texts about 'anger' tend to be about action, such as fictional accounts of battle (Homer's Iliad, for example). On the other hand, detailed texts about 'fear' tend to be philosophical (such as Augustine's City of God and Hobbes's Leviathan).

Freeze and Fawn

When the lizard brain is invoked in preparation for fight/flight by the social processing domain of hypercognition, two additional low-level responses may occur.

During 'freeze,' there is paling, piloerection, immobility, sounds, and body language. Sometimes, in dangerous situations where the fight/flight response should be activated, unconscious activity in deeper processing is unable to provide a choice of action, which accounts for people freezing in unexpected situations. If freezing actually turns out to be successful, deeper processing in favor of this response may strengthen; if strengthened to abnormal levels, the result is referred to as catatonia.

During 'fawn,' there is playing, mating, and mutual passivity in a social situation. If these activities are unsuccessful, a negative association with social interaction forms at a deeper level, which, if extreme enough to be abnormal, is referred to as paranoia.

Hate and Love

If hypercognition is turned to examining the self after the freeze response, such extreme cognitive dissonance is observed that the self cannot clearly define the emotional state. In this case, the self may decide to dub the experience as hate. Similarly, if hypercognition is turned upon the self after the fawn process, the individual may decide to dub the experience as love. However, the nature of these 2nd-order emotions is more complex. The lizard brain is still assessing a fight/flight response during freeze and fawn, and the identification of the emotion is also affected by current and prior assessments of mild risks and mild aggressions. Therefore, the hypercognition system itself is already involved in deciphering which combinations of lower-level experiences are involved in the experience of hate or love, and the abstractions are already unique to each individual, depending on their prior experience. At this point, the cognition system can no longer be considered entirely logical, as will be discussed in a later section.

Similarity to other Emotional Models

It transpires that the latter six emotions deduced as necessary for this model are the same as those defined by Ekman (1999). The same basis was used by the MIT project in defining emotional expressions for Kismet (Breazeal, 2002). Additional emotional states are located within the multidimensional space formed by three opposing conditions.

MIT's Kismet has similar emotions, also based on Ekman

The difference between Breazeal's system and this model is that this system is essentially a tabula rasa model: the intermediate emotions on the six axes are defined by experience rather than preprogrammed.
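A minimal sketch of such a tabula rasa emotion space follows, using the four opposing pairs of basic feelings defined earlier in this article as axes. The axis names, the [-1, 1] coordinates, and the nearest-label lookup are illustrative assumptions; the point is only that intermediate emotion labels start empty and are filled in from experience rather than being preprogrammed.

from collections import defaultdict

# Each axis is anchored by an opposing pair of basic feelings; the poles are
# given, but everything between them must be learned.
AXES = {
    "arousal": ("drowsiness", "alertness"),
    "threat":  ("fear", "anger"),
    "social":  ("freeze", "fawn"),
    "bond":    ("hate", "love"),
}

class EmotionSpace:
    def __init__(self):
        self.learned = defaultdict(dict)   # axis -> {position: learned label}

    def learn(self, axis: str, position: float, label: str) -> None:
        """Associate an experienced state (position in [-1, 1]) with a label."""
        self.learned[axis][round(position, 2)] = label

    def describe(self, axis: str, position: float) -> str:
        """Map a felt position onto the nearest learned label, else a pole."""
        labels = self.learned[axis]
        if labels:
            nearest = min(labels, key=lambda p: abs(p - position))
            if abs(nearest - position) < 0.25:
                return labels[nearest]
        negative, positive = AXES[axis]
        return positive if position >= 0 else negative

space = EmotionSpace()
print(space.describe("bond", 0.3))     # no experience yet -> "love"
space.learn("bond", 0.3, "fondness")   # an intermediate label acquired from experience
print(space.describe("bond", 0.35))    # -> "fondness"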

Associating Perceptions with Emotions

In the semiotic diagram of the prior topic in this series, it was illustrated how two simple abstractions of perceptions, 'light' and 'dark,' could cause cognitive dissonance, and therefore emotions. The connections are formed while the hypercognition process abstracts prior experiences. As each abstraction is layered over prior abstractions, the history of the experience persists in deeper processing, although normally with less intensity. Nonetheless, the associations remain, after their original formation, inside the processing domains. Hence, emotions become unconsciously associated with apparently unrelated sensations, and the relationship between the deeper associations and feelings derives from experiential history.

This causes each individual to have a unique experience when examining their own emotions. Moreover, the self-apperception of emotion neither directly corresponds to the linguistic representation, nor to the learned apperceptions of the emotional states of others. The emotions themselves are derived from extremely complex experiential abstractions, which hypercognition attempts to map onto a comparatively limited lexicon in the linguistic processing domain.

The Function of Dreams in Hypercognition

The above section described a simple set of eight basic feelings: alertness and its antithesis; the continual unconscious action of the fight/flight response; the first-order freeze/fawn responses; and the second-order love/hate emotions. While awake, an individual is continuously accumulating experiences associated with the lower-order emotions. However, the individual is only aware of the lower-order emotions when in introspection, and the higher-order emotions require additional hypercognition loops to define.

During the day, there may not be enough time to permit new associations, and re-ordering of old associations via introspection may not be possible within the available time.

At night, while the body is performing other autonomous functions, the individual may therefore also dream. Fully conscious dreams, where the neocortex is fully active, may not be required at all times, depending on the number of significantly different experiences accumulated during the day. If there is a large backlog of processing, the individual will experience more active dreaming. Experiments have shown that even the visual cortex is active in an individual during active, conscious dreaming. So a significant percentage of the entire human brain, about 30% of its organization, is involved in the dreaming activity.

Reorganization of Domain-Specific Abstractions and Self Awareness

In the hypercognition model, there are no input stimuli from the external world during sleep, but the hypercognition system has a complete feedback loop and is capable of acting on itself, without other input, during introspection. Thus, dream activity can be modeled as pure introspection. For the purposes of this system, dream activity may function identically to self-examination in the waking state, with one exception: how dreams are remembered once the individual is conscious.

Directly after waking from a dream state, a portion of the dream is still in the short-term memory loop (20~30 seconds). Thus, the most recently dreamed associations and visual images may be immediately available from the feedback loop, if one introspects on the dream soon enough after waking up. If not, then the only portions of the dream which can be remembered are those which were associated with self-perception (requiring activation of the superego, or persona, in the psychoanalytical models). Within this extended model of hypercognition, these long-term dream memories reflect direct changes to self-perception, rather than simple changes in the association of experience with deeper symbols. These changes to self-perception are concrete representations of the state of the self abstraction. This is why dreams are considered so important in Jungian psychology.
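The short-term loop itself is simple to sketch: a buffer whose contents expire after the 20~30-second window quoted above, so a dream can be recalled only if it is re-introspected quickly enough after waking. The 25-second default and the string contents are illustrative assumptions.

import time
from collections import deque

class ShortTermLoop:
    def __init__(self, window_seconds: float = 25.0):
        self.window = window_seconds
        self.items = deque()                  # (timestamp, content) pairs

    def store(self, content: str) -> None:
        self.items.append((time.time(), content))

    def recall(self) -> list:
        """Return only the items still inside the retention window."""
        cutoff = time.time() - self.window
        while self.items and self.items[0][0] < cutoff:
            self.items.popleft()              # faded from the loop
        return [content for _, content in self.items]

loop = ShortTermLoop()
loop.store("dream image: flying over water")
print(loop.recall())   # still available if introspected soon after waking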

Modeling Dreams: 1. Categorical Reorganization

In conventional artificial-intelligence models, the computer does not 'dream.' However, it would be possible to model the action of dreams in a computer using standard algorithms for optimization of data and logic structures.

The Perturbed Annealing Algorithm

This is a rather advanced subject, so some paragraphs explaining the algorithm are necessary first. Then it will be clear how simulated annealing with perturbation can model the dreaming process.

'Annealing' originally referred to the atomic-level reorganization that occurs when heating a metal to a high temperature and then gradually letting it cool. When tempering steel, for example, a blacksmith heats the metal until it is red hot. Then the blacksmith dips the metal in water to start the cooling, and the metal continues to cool slowly after being taken out of the water. While the metal is hot, the atoms can rearrange themselves so that the bonds between the atoms are more uniform and stronger. The hotter the metal, the greater the distance that atoms can move while rearranging themselves. As the metal cools, the atoms can move smaller and smaller distances; the result is that the atoms slowly settle into a series of small, discontinuous lattices throughout the metal object. The different sizes of the elements added to the iron, such as titanium and chromium, change the structure and shape of the lattice crystals inside the metal.

A Basic Simulated Annealing Algorithm for Improving System Order. The AI model runs it continuously, then perturbs (suddenly raises) the temperature at the beginning of a dream

More recently, annealing has been used to make highly coherent crystalline structures, such as those needed in semiconductors. However, to remove all the lattice discontinuities for a more perfect crystalline structure, simply lowering the temperature gradually does not work, because some atoms do not have the chance to move far enough to fit into a perfect lattice; instead they are caught by partially complete lattices, away from where they ideally should be.

To improve the perfection of the lattice structure, one can use 'perturbed' annealing. This refers to heating the atoms back up a little, at various intervals during the cooling process. As the lattice starts to cool, the atoms start to settle into their ideal locations, and due to the initial random variation, some parts of the crystal form before others. Once part of the crystal has formed, it is more resistant to heat, so the temperature can be raised a little to help the other atoms, not yet in their ideal locations, move greater distances into the best areas for the final ideal structure. Then, as the cooling continues, the atoms move smaller distances, and more localized areas of complete lattice form. By gradually cooling the material, and perturbing it with a little heat at intervals, the entire mass can form a perfect structure. This is how silicon ingots are made for semiconductor manufacture.

The process of annealing can easily be modeled with a computer. This is called 'simulated annealing.' Variables in the annealing program set the rate at which nodes, arcs, or blocks can move from an initial state into a new state. After each simulation step, modeling one tiny change in temperature, the new state of the system is compared to that which is desired, and portions of the topology are frozen when they meet the target criteria. The next simulation step then starts, allowing another small movement.

Additional variables set the frequency, amount, and duration of the perturbations. These perturbations allow the unfixed elements to rearrange themselves by a greater amount. The cooling steps and perturbations continue until all of the elements reach their perfect arrangement.
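A minimal sketch of such a perturbed simulated annealing loop is shown below. The energy and neighbor functions, the parameter values, and the toy sorting example are illustrative assumptions; only the overall structure of gradual cooling with periodic reheating follows the description above.

import math
import random

def perturbed_annealing(state, energy, neighbor, steps=10_000,
                        t_start=1.0, cooling=0.999,
                        perturb_every=2_000, reheat=0.5):
    temperature = t_start
    current, current_e = state, energy(state)
    best, best_e = current, current_e

    for step in range(1, steps + 1):
        candidate = neighbor(current)
        candidate_e = energy(candidate)
        delta = candidate_e - current_e
        # Always accept improvements; accept worse states with a probability
        # that shrinks as the temperature falls.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current, current_e = candidate, candidate_e
            if current_e < best_e:
                best, best_e = current, current_e
        temperature *= cooling                 # gradual cooling
        if step % perturb_every == 0:
            temperature += reheat              # perturbation: a brief reheat
    return best, best_e

# Toy example: recover a sorted order of ten numbers by random adjacent swaps.
def energy(order):
    return sum(1 for a, b in zip(order, order[1:]) if a > b)

def neighbor(order):
    i = random.randrange(len(order) - 1)
    swapped = list(order)
    swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
    return swapped

print(perturbed_annealing(random.sample(range(10), 10), energy, neighbor))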

A Working Example: Symbolic Reorganization During Dreaming

In the previous section, it was stated that there could be insufficient time for introspection, during the day, for all the desired experiences to create optimal abstractions. Let us consider a very simple example, again with color.

One of the most interesting aspects of colored objects is that their apparent hues and shades change both in different light conditions and depending on the hue and color of the objects around them. An individual working in art or fashion finds these subtle differences very important, but may not have the time during the day to organize the relative appearances of a colored object in different contrasting settings and lighting conditions.

So during the day, the person stores temporary images of the perceived object's color in the spatial processing domain of hypercognition, organized in the categorical domain by time of day.

When the person falls asleep, there is cognitive dissonance indicating dissatisfaction with the information about the colored object, activating a dream process. The individual then dreams of the object's apparent colors which were seen during the day. At the beginning of the dream, the individual takes two of the memories and places them back in the visual cortex, but in a different order than the times the images were stored during the day, so as to compare them. The executive function then reexamines the cognitive dissonance arising from the new order.

If the new order is satisfactory, the executive function reorganizes and reprioritizes the memories of those visual events, storing the results back into the categorical processing domain as new abstractions. If the new order still creates dissatisfying cognitive dissonance, a new dream is triggered, repeating the process for as much time as is available during sleep. After waking up, processing in the categorical domain acts upon the apparent hue and tone of the object in the various conditions imagined during dreaming, rather than by the time the images were seen. The temporal order of the original observations has been pushed down into the unconscious during the dreams, and the new categorical order reflects the preferred settings for the colored object.

Modeling Symbolic Reorganization During Dreams with Perturbed Annealing

In order for the categorical order to be changed in the above process, many possible permutations of the stored images may need to be considered. For example, if there are 16 different combinations of clothes that may be worn with the colored object, then searching all possible arrangements of the 16 stored images for the optimal order could require on the order of 2^16, or 65,536, comparisons. During the day, there is insufficient time to compare all variations, so the executive function simply swaps neighboring images, in the temporal order they were stored. This is the 'gradual cooling' phase of the simulated annealing algorithm. During the night, it may be found that simple swapping of temporally adjacent images was not sufficient to create a satisfactory order. At the initialization of the dream sequence, the executive function therefore 'perturbs' the order by swapping images at greater distances from each other than direct neighbors. During a single dream, the executive function goes back to swapping and comparing neighboring images only. If the result is still unsatisfactory, the executive function triggers another perturbation. While the human brain may use a different mechanism, simulated annealing with perturbation provides a straightforward model which can be implemented in an artificial-intelligence system. It is amenable to software and, when combined with the hypercognition system, allows a computer to simulate the creation of new symbolic abstractions during dreams.
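The sketch below applies that idea to the 16-image example: daytime introspection is limited to neighbor swaps, while each dream begins with a longer-distance 'perturbation' swap before returning to neighbor swaps. The dissonance function, which here simply measures distance from a stand-in 'preferred' order, is an illustrative assumption for whatever evaluation the unconscious actually provides.

import random

def dissonance(order, preferred):
    """How far the stored order is from the (unconsciously) preferred one."""
    return sum(1 for got, want in zip(order, preferred) if got != want)

def neighbor_swap(order):
    # Daytime move: swap two temporally adjacent images.
    i = random.randrange(len(order) - 1)
    order = list(order)
    order[i], order[i + 1] = order[i + 1], order[i]
    return order

def perturb(order, max_distance=8):
    # Dream-initiating move: swap images separated by a greater distance.
    i = random.randrange(len(order))
    j = min(len(order) - 1, i + random.randint(2, max_distance))
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return order

def dream_reorganize(order, preferred, dreams=200, swaps_per_dream=50):
    best = list(order)
    for _ in range(dreams):
        if dissonance(best, preferred) == 0:
            break                            # no dissonance left: restful sleep
        trial = perturb(best)                # perturbation at the start of a dream
        for _ in range(swaps_per_dream):     # within the dream: neighbor swaps only
            candidate = neighbor_swap(trial)
            if dissonance(candidate, preferred) <= dissonance(trial, preferred):
                trial = candidate
        if dissonance(trial, preferred) < dissonance(best, preferred):
            best = trial                     # store the improved categorical order
    return best

day_order = random.sample(range(16), 16)     # 16 images stored by time of day
preferred = list(range(16))                  # stand-in for the unconsciously preferred order
print(dream_reorganize(day_order, preferred))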

Modeling Dreams: 2. Numerical and Logical Optimization

The perturbed annealing algorithm, described above, allows modeling of symbol reorganization. Symbol reorganization can change the order of symbolic abstractions in the categorical and spatial domains. However, the annealing method is limited to the ordering and arrangement of symbols, and does not enable changing the numerical and logical relationships between the symbols.

Numerical processing differs from categorical processing: categories are collections, sequences, or hierarchies of different objects, whereas numerical processing involves repetition of identical quantities. Examples of numerical processing include musical rhythms and notes, other temporal intervals, and mathematics itself. Logical optimization involves removing redundant terms. In symbolic representation, for example, it may be found that two abstractions have an intervening abstraction between them which is not necessary and can be removed, as sketched below. Optimization of these two domains is well understood in mathematics and logic as formal systems, although the brain itself may not use the ideal forms of measures and logical relationships that have been carefully defined over millennia by many rational people working together. These considerations will be discussed in later topics.
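As a minimal sketch of the logical-optimization case, the fragment below removes an intermediate abstraction that adds nothing between two others. The dictionary representation of abstraction links, and the redundancy test itself, are illustrative assumptions.

def remove_redundant(links):
    """links maps each abstraction to the set of deeper abstractions it points to."""
    simplified = {node: set(targets) for node, targets in links.items()}
    for node in list(simplified):
        if node not in simplified:
            continue                          # already removed as redundant
        targets = simplified[node]
        for mid in list(targets):
            mid_targets = simplified.get(mid, set())
            # `mid` is redundant if it adds nothing: exactly one outgoing link
            # and no other abstraction links to it.
            other_parents = [n for n, t in simplified.items()
                             if mid in t and n != node]
            if len(mid_targets) == 1 and not other_parents:
                targets.discard(mid)
                targets.update(mid_targets)   # connect directly to the deeper node
                simplified.pop(mid, None)
    return simplified

chain = {"light": {"bright"}, "bright": {"nice"}, "nice": set()}
print(remove_redundant(chain))   # {'light': {'nice'}, 'nice': set()}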

Modeling Dreams: 3. Social and Linguistic Interaction

We use language in social interactions, but social processing occupies a different processing domain from language. The social abstractions on which we form judgments of moral, ethical, and legal propositions are not the same as the abstractions for the words we use to describe them. This problem has probably caused more debate than any other part of human cognition, as there are so many different opinions about how words for social abstractions acquire their meaning, or whether, in fact, the words themselves are expressions of some ideal concept. In this model, the words correspond directly neither to social abstractions nor to emotions. While this removes part of the debate, further considerations on how meaning is attached to words for social interactions will need to be addressed in greater depth in later sections.

Emotions and Social Interactions in Dreams

The initial discussion above described the basis for eight basic emotions. Apperception of the self's emotions during introspection is one of the last stages of cognitive development as children age, and emotions connected with sex cannot resolve meaningfully until after puberty. As such, in this model, emotional perceptions of the self are stored in the social domain. When evaluating information from the processing domains, the executive function receives signals from the social domain indicating stored memories of emotional cognitions. These memories of emotions create apperceptions of cognitive dissonance that are processed and adjusted in the dream state.

Deeper memories of emotions stored in the social domain also strongly influence social interactions of a sexual nature; that is to say, the unconscious influence on sexual behavior is deeply influential. Freudian psychoanalysis seeks to uncover these influences by inducing conscious memories of childhood trauma, especially those related to taboos. From the Jungian perspective, the social domain contains archetypal descriptions of other human beings, which are abstracted from important people known in one's life: the archetypal mother is an idealized form of maternal care; the archetypal father is an idealized form of authority; and the other archetypes, such as those found in Joseph Campbell's hero's journey, are based on idealized forms of one's lovers, friends, and enemies. These are some of the most complex abstractions in all of hypercognition, and a significant portion of dreaming is involved in role-playing situations with archetypes appearing as other people within one's dream. The archetypes often appear to be people one knows, but the theory is that the dreams are not about the actual people; rather, they refine one's understanding of them as archetypes. In this respect, the social interactions with archetypes in dreams tend to follow the same model of annealing with perturbation described above for categorical processing.

Linguistic Processing in Dreams

The last in this series of discussions on domain processing is modeling linguistic processing. In dreams, we often try talking to the archetypal characters in our unconscious, and at some point in the dream we have the experience that the characters do not respond. We are left considering whether the words we said in the dream were those we really would wish to utter in a real-world situation. At that point, we discover the limits of our own knowledge, and the limits of what our own cognition can tell us. Then the dream ends, and the memory of the uncertainty often persists. But before that point, we have the opportunity to experiment with words, and to assess whether they produce at least the emotional state that we desire in ourselves.

Towards a Hypercognitive AI

The above discussion in this topic outlines in more or less detail how a complete computer simulation of hypercognition could be organized and each of the processing domains modeled, but does not describe how a computer could develop cognition in the same way as a human being.

Modeling the Executive Function

The above discussion describes domain processing, the feedback loop, the conception of self, as well as how abstractions affect the emotions and control stage of hypercognition. For a complete computer model, the remaining element to define is the executive function. The executive function, being the point of consciousness that chooses actions, requires one additional factor: motivation. All motivation falls into two types: avoidance and desire. The expectation of unpleasant stimuli causes avoidance, and the expectation of desirable stimuli causes desire.
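A minimal sketch of this two-sided motivation is shown below: each candidate action is scored by its expected desirable stimulus minus its expected unpleasant stimulus, and the executive function chooses the action with the best balance. The action names and numeric weights are illustrative assumptions.

from typing import NamedTuple

class Action(NamedTuple):
    name: str
    expected_pleasant: float     # strength of the desirable expectation (desire)
    expected_unpleasant: float   # strength of the unpleasant expectation (avoidance)

def choose(actions):
    """Pick the action whose desire most outweighs its avoidance."""
    return max(actions, key=lambda a: a.expected_pleasant - a.expected_unpleasant)

options = [
    Action("approach the light", 0.8, 0.1),
    Action("cry for the mother", 0.6, 0.3),
    Action("stay still",         0.2, 0.1),
]
print(choose(options).name)   # -> approach the light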

Maslow's hierarchy of needs provides a basis for desire. Maslow's lower-level needs, for physiological well-being and safety, are straightforward, being based on biological conditions and the environment. However, Maslow's hierarchy does not provide a value system for assessing the higher-level needs of love, esteem, and self-actualization. Eastern philosophy is much more focused on this characteristic of human behavior. Later sections will therefore derive a value system (with a meta-value system) from Eastern philosophy.

Intercomputer Interaction

Given that data storage and processing power are not limits on the complexity of the desired model, the last consideration is how the computer would learn and develop a personality without taking the many years that a human being requires. Suppose, then, that all aspects of the hypercognition system have been modeled in software, and that analogies for sensory stimulus, glandular controls, physiological needs, motor behavior, and language interaction have all been defined.

The initial stages of personal interaction (assuming that the required resources are not too great for all the programs to run on one machine) could well be simulated by the interaction of multiple computers: one computer hosts the model of the hypercognition system, a second provides interactions with the mother, and a third provides interactions with the father.

This is possible for initial learning because, during the initial stages of development, the learning system is not capable of making many abstract associations. The input necessary from other programs modeling the parental interaction and the environment does not need to be very sophisticated at first. Note that, due to randomization in the annealing model of dreams, the abstractions formed during the development stages will be different for each instance of the child model, as will be clarified later.

A second child model should later be set up, probably on yet another computer, to create a second hypercognition system, so that the two developed child personalities can then interact. Interaction between two learning systems provides the third social archetype, after the paternal and maternal archetypes. Due to the different extents of learning by the two child models, their interaction will create multilevel personae. The archetypes of each child will be identified partly by associations with existing authority archetypes, and partly through associations with self-perception.
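A minimal sketch of this multi-computer arrangement follows: one process hosts the learning child model, while scripted 'mother' and 'father' agents supply the simple early interactions. The message format, the scripted responses, and the learning rule are illustrative assumptions, not a protocol defined by this model.

import random

class ScriptedParent:
    def __init__(self, name, responses):
        self.name = name
        self.responses = responses

    def respond(self, signal: str) -> str:
        # Early parental interaction does not need to be sophisticated.
        return self.responses.get(signal, "comfort")

class ChildModel:
    def __init__(self):
        self.associations = {}               # signal -> (who responded, how)

    def act(self) -> str:
        return random.choice(["cry", "smile", "babble"])

    def learn(self, signal: str, who: str, response: str) -> None:
        self.associations[signal] = (who, response)

mother = ScriptedParent("mother", {"cry": "feed", "smile": "smile back"})
father = ScriptedParent("father", {"cry": "rock", "babble": "talk"})
child = ChildModel()

for _ in range(5):                           # a few interaction cycles
    signal = child.act()
    parent = random.choice([mother, father])
    child.learn(signal, parent.name, parent.respond(signal))

print(child.associations)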

This could be sufficient to simulate much of the first years of development, but as anyone who has raised children knows, there comes a point where questions really cannot be answered by a computer.

First Cognitions

To understand the theory of how the computer would learn, the following example takes the simplest possible case: how a newborn child learns the first abstract association.

A Baby's Initial Cognitions in the Undifferentiated State

Consider first that, before birth, the child is in a nurturing environment, continually fed. There is no need to understand what is part of the child and what is part of the mother. Everything is essentially a pleasant, undifferentiated continuum, easy and nice. At birth, the child is suddenly thrust into an unknown environment, without food! What is likely to be the child's first cognitive development? Let's assume that the child is not mistreated, is kept warm, and so on. As the visual cortex is the largest system in the human brain, the baby is most likely to notice an enormous number of visual stimuli.

At first, the shades and colors would make no sense. But we know from developmental experiments that children later form top-down distinctions between colors by distinguishing large objects with bright primary colors first. Therefore the first cognitive observation of a baby is likely to be between when the lights are on and when the lights are off, because that provides the largest difference in experience, between light and dark, to the largest system in the brain, the visual cortex. Now suppose the parent always turns the lights on in the room while feeding the baby. The first really positive association the baby has with the external world is between food in the womb and food when the lights are on.

Therefore, a bright white light is nice.

The baby perhaps is unsure about the dark for a while, but the bright white light is definitely nice. So it is reasonable to assume that this is one of the first abstractions formed by hypercognition: bright white light = nice. Another primary fact we know about the optical system is that it is designed to detect movement. Now the baby notices a large, moving collection of strange colors. The lower part changes color a lot, but the top part has a piece that always has the same colors and shapes: the mother's face. Very rapidly, the baby learns that this face is one of the nicest things to see, because after the light, there is the mother's face, then food. Because seeing the mother is more immediately connected to feeding, a new abstraction is formed: food is always associated with the mother. The light may be on, but if the mother is not present, there is no food. So the original association, between light and nice, is pushed down, becoming a deeper abstraction. In dreams, the baby remembers the abstraction of light being nice before anything else, but the light may no longer be nice directly, by itself. The baby remembers the contrast between light and dark, and may form the first significant causal relationship of negation: light is not dark, and dark is not light, but light is nice, so therefore dark is not nice.

But more importantly, the light is no longer directly associated with food. The original, direct association is remembered as part of experience. And now the mother is directly associated with food, so the mother is nice, instead.

As the new cognition forms, the original, first association of 'light and nice' is pushed down into the unconscious. With more and more associations, the first direct association is pushed deeper and deeper down. When eventually we reach the point of considering what is divine, we associate the divine with bright, white light, because deep in the unconscious, it was the first nice association we formed...in this example...

But what of the times when the parents left the light on even when they were not in the room? Or what if the parents fed the baby in the dark? What then? Maybe a different cognition was formed first... Maybe the baby was cold and hungry... If that was noticed first, maybe it would make the person confused about the divine when they are old enough to consider the concept. Would some instead associate the divine with a womb-red, or a blood-red devil? How could we know? Only the baby had access to those experiences directly. As the baby ages, the first associations are buried deeper and deeper in the unconscious. There is no way to know.

Nonetheless, whatever the case for any one person, the direct association of light and dark with food must be one of the earliest cognitions, pushed deep into the subliminal, far beyond that which can be known directly, in the depths of the unconscious mind.
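The 'pushing down' described in this section can be sketched very simply: each new conscious association moves the older ones one layer deeper, so the earliest association ends up deepest in the unconscious. The stack representation below is an illustrative assumption.

class AssociationStack:
    def __init__(self):
        self.layers = []      # index 0 = conscious; higher index = deeper, unconscious

    def associate(self, source: str, target: str) -> None:
        """Add a new conscious association, pushing prior ones one level deeper."""
        self.layers.insert(0, (source, target))

    def conscious(self):
        return self.layers[0] if self.layers else None

    def deepest(self):
        return self.layers[-1] if self.layers else None

baby = AssociationStack()
baby.associate("bright white light", "nice")   # first association after birth
baby.associate("mother's face", "food")        # pushes the light association deeper
baby.associate("mother", "nice")
print(baby.conscious())   # ('mother', 'nice')
print(baby.deepest())     # ('bright white light', 'nice') -- the earliest, now deepest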

Conclusion

Together with the other parts of this prologue, we now have the basic concepts for defining a complete AI system which has emotions and dreams. Further work is required on the details of the algorithms for each of the components, which will be described in subsequent topics.

References

  1. Breazeal, C. "Sociable Machines" (ca. 2002) http://www.ai.mit.edu/projects/sociable/emotions.html
  2. Demetriou, A. (2006) Neo-Piagetian Theories of Cognitive Development.
    https://www.academia.edu/2090948/Neo-Piagetian_theories_of_cognitive_development
  3. Demetriou, A ; Spanoudis, G.; Mouyi, A. (2006) "A Three-level Model of the Developing Mind: Functional and Neuronal Substantiation"
    https://www.academia.edu/2065614/A_Three-level_Model_of_the_Developing_Mind_Functional_and_Neuronal_Substantiation
  4. Ekman, P. (1999). "Basic Emotions". In: T. Dalgleish and M. Power (Eds.), Handbook of Cognition and Emotion. John Wiley & Sons Ltd, Sussex, UK. http://web.archive.org/web/20101228085345/http://www.paulekman.com/wp-content/uploads/2009/02/Basic-Emotions.pdf
  5. Festinger, L. (1957). A Theory of Cognitive Dissonance. California: Stanford University Press.
  6. Freud, S. (1933) New Introductory Lectures on Psychoanalysis, Penguin Freud Library
  7. Freud, S. (1940) An Outline of Psycho-analysis
    http://archive.org/stream/outlineofpsychoa027934mbp/outlineofpsychoa027934mbp_djvu.txt
  8. Henderson, D.; Jacobson, S.; Johnson, A. (2002) "The Theory and Practice of Simulated Annealing."
    http://homes.ieu.edu.tr/~agokce/Courses/Chapter%208%20Theory%20and%20Practice%20of%20simulated%20Annealing.pdf
  9. Jung, C. G. (1935) Structure & Dynamics of the Psyche, The Collected Works of C. G. Jung, Volume 8. Princeton University Press
  10. Jung, C.G. (1971). Psychological Types, Collected Works, Volume 6, Princeton, N.J.: Princeton University Press

More Complete Reference List

Some have asked for more references. I am still compiling the list, but here is a start.

  1. Asch conformity experiments: http://en.wikipedia.org/wiki/Asch_conformity_experiments
  2. Baddeley, A; Sala, S. D. (1996) "Working Memory and Executive Control," Philosophical Transactions: Biological Sciences 351.1346, Executive and Cognitive Functions of the Prefrontal Cortex http://www.jstor.org/stable/3069185
  3. Breazeal, C.; Gray, J.; Berlin, M. "An Embodied Cognition Approach to Mindreading Skills for Socially Intelligent Robots" http://ijr.sagepub.com
  4. Breazeal, C.; Thomaz, A. (2005) "Learning from Human Teachers with Socially Guided Exploration" http://www.cc.gatech.edu/~athomaz/papers/BreazealThomaz-ICRA08-final.pdf
  5. Chomsky, N. "Tool Module: Chomsky's Universal Grammar"
  6. Cowan, N. (2010) "Multiple Concurrent Thoughts: The Meaning and Developmental Neuropsychology of Working Memory," Dev Neuropsychol.
  7. Cannon, W.B. (1915) "Bodily Changes in Pain, Hunger, Fear and Rage: An Account of Recent Researches into the Function of Emotional Excitement." Appleton.
  8. Davidson, D.; Suppes, P.; Siegel, S. (1957) Decision-Making: An Experimental Approach. Stanford University Press.
  9. Davidson, D (2001). Essays on Actions and Events. Oxford University Press.
  10. Demetriou, A. (2006) Neo-Piagetian Theories of Cognitive Development. https://www.academia.edu/2090948/Neo-Piagetian_theories_of_cognitive_development
  11. Demetriou, A ; Spanoudis, G.; Mouyi, A. (2006) "A Three-level Model of the Developing Mind: Functional and Neuronal Substantiation" https://www.academia.edu/2065614/A_Three-level_Model_of_the_Developing_Mind_Functional_and_Neuronal_Substantiation
  12. Festinger, L. (1957). A Theory of Cognitive Dissonance. California: Stanford University Press.
  13. Freud, S. (1933) New Introductory Lectures on Psychoanalysis, Penguin Freud Library
  14. Freud, S. (1940) An Outline of Psycho-analysis http://archive.org/stream/outlineofpsychoa027934mbp/outlineofpsychoa027934mbp_djvu.txt
  15. Haidt, J. "The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment," University of Virginia http://www.motherjones.com/files/emotional_dog_and_rational_tail.pdf
  16. Henderson, D.; Jacobson, S.; Johnson, A. (2002) "The Theory and Practice of Simulated Annealing." http://homes.ieu.edu.tr/~agokce/Courses/Chapter%208%20Theory%20and%20Practice%20of%20simulated%20Annealing.pdf
  17. Jung, C. G. (1935) Structure & Dynamics of the Psyche, The Collected Works of C. G. Jung, Volume 8. Princeton University Press
  18. Jung, C.G. (1971). Psychological Types, Collected Works, Volume 6, Princeton, N.J.: Princeton University Press
  19. Kripke, S. (1980) Naming and Necessity. Harvard University Press.
  20. Kruger and Dunning "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," Journal of Personality and Social Psychology, 77 (6): 1121–34. doi:10.1037/0022-3514.77.6.1121 http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
  21. Maslow, A.H. (1943). A theory of human motivation. Psychological Review 50 (4) 370–96. Retrieved from http://psychclassics.yorku.ca/Maslow/motivation.htm
  22. Milgram experiment: http://en.wikipedia.org/wiki/Milgram_experiment
  23. Nisbet, E.; Garrett G. (2008) "Belief in rumors Hard to Dispel: Fact checking easily undermined by images, unrelated facts," Ohio State University. http://www.comm.ohio-state.edu/kgarrett/FactcheckMosqueRumors.pdf
  24. Nuxoll, A. (2007) Enhancing Intelligent Agents with Episodic Memory http://deepblue.lib.umich.edu/bitstream/handle/2027.42/57720/anuxoll_1.pdf;jsessionid=17B337D6E41731B075E9C653EA0CF1A7?sequence=2
  25. Plato (380 BC) The Republic, Allegory of the Cave, 514a–520a
  26. Prasad, M.; Perrin, A.; Bezila, K.; Hoffman, S.; Kindleberger, K.; Manturuk, K.; Powers, A. (2009) ""There Must Be a Reason": Osama, Saddam, and Inferred Justification," Sociological Inquiry, 29.2, pages 142–162 http://www.sciencedaily.com/releases/2009/08/090821135020.htm
  27. Raaijmakers, J.G (1993) "The story of the two-store model of memory: past criticisms, current status, and future directions". Attention and performance. XIV (silver jubilee volume). Cambridge, MA: MIT Press
  28. Russell, Bertrand (1905) On Denoting. Mind.
  29. Sampson, G. (2005) Educating Eve: The 'Language Instinct' Debate. continuum.
  30. Stanford Prison Experiment: http://en.wikipedia.org/wiki/Stanford_prison_experiment
  31. Thorndike E.L.; Woodworth R. (1901) The Influence of Improvement in one Mental Function upon the Efficiency of Other Functions. Psychological Review, 8, 247-261.
  32. Weizenbaum, J. (1976) Computer Power and Human Reason: From Judgment To Calculation. San Francisco: W. H. Freeman