Value system and rules

Let’s say we have a machine that understands all the high-level abstractions and patterns in data. Let’s say it has built, on its own, models to predict what’s coming next. Data science ends here, presenting models and predictions to entrepreneurs or executives to make a decision. But is the machine intelligent at this stage? No; it has to be able to make decisions on its own to be truly intelligent.

So, what guides the decision-making process?

A value system.

A company makes its decisions to maximize profit. Profit is its value, and that guides its decision-making process.

A man’s valuation of his own survival, deeply coded in his DNA, motivates him to make decisions that help him get proper food, shelter, clothing, sex, etc.

Evolution has managed to produce very rich and distinct emotions that help us identify reinforcements in the real world. This makes it imperative for any machine to be hard-coded with such value-system rules before it can make use of reinforcement learning or the latest tech meme of that kind.

Whether it is chess or Go, if there is something the machine should definitely know before starting the game, it is the winning state, and possibly evaluation functions for all the different states possible in the game. But that is a small world in which to code valuations/reinforcements. How would you go about writing such code for a machine to evaluate the humongous set of states, instances, and objects in the real world? This makes me believe that modeling the value system, or giving a machine the ability to build its own value system, is at the core of AI. It is too early to ignore Minsky’s rule-based systems.
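
To make the contrast concrete, here is a minimal sketch (my own toy example, not from any particular system) of how small the hard-coded value system of a closed game really is. For tic-tac-toe, the entire “winning state” knowledge fits in one function; nothing remotely this compact exists for the open-ended real world.

```python
def evaluate(board):
    """Toy value function for tic-tac-toe.

    `board` is a 3x3 list of lists holding 'X', 'O', or None.
    Returns +1 if X has won, -1 if O has won, 0 otherwise.
    """
    lines = (
        board                                   # three rows
        + [list(col) for col in zip(*board)]    # three columns
        + [[board[i][i] for i in range(3)],     # two diagonals
           [board[i][2 - i] for i in range(3)]]
    )
    for line in lines:
        if line == ['X'] * 3:
            return +1
        if line == ['O'] * 3:
            return -1
    return 0  # non-terminal states: the genuinely hard part to valuate
```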

Induction is key for AGI

Induction -> models -> prediction -> decision -> action

For example, consider finding a number in a telephone directory.

The machine induces from the data that the numbers are sorted in alphabetical order of names.

It models, or imagines, a straight-line domain over all the data.

It predicts in which part of the book the number can be found.

It takes the decision, and the action, to open the book to that part.

It then infers again from the data it sees, and updates its predictions, decisions, and actions accordingly with the new information.
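
As a toy illustration of that loop (my own sketch; the `directory` of (name, number) pairs is hypothetical), the code below mirrors the induce → model → predict → decide → act → update cycle: it assumes the names are spread roughly linearly and interpolates where to open next.

```python
def find_number(directory, name):
    """Look up `name` in a sorted list of (name, number) pairs."""
    lo, hi = 0, len(directory) - 1
    while lo <= hi:
        # Model + predict: assume a straight-line spread of first letters
        # and interpolate the likely position of the name.
        span = max(ord(directory[hi][0][0]) - ord(directory[lo][0][0]), 1)
        offset = (ord(name[0]) - ord(directory[lo][0][0])) / span
        guess = min(max(lo + int(offset * (hi - lo)), lo), hi)
        # Decide + act: open the book at the predicted position.
        found_name, number = directory[guess]
        if found_name == name:
            return number
        # Update: narrow the model with the new information and repeat.
        if found_name < name:
            lo = guess + 1
        else:
            hi = guess - 1
    return None  # not listed

phonebook = [("Alice", 55501), ("Bob", 55502), ("Mallory", 55503), ("Zoe", 55504)]
print(find_number(phonebook, "Mallory"))  # -> 55503
```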

Marcus Hutter’s approach

In most of the sciences, it is not necessary for a theory to be computable. But since artificial intelligence sits as a branch of computer science, most AI researchers look for a theory of AI from the perspective of its computability. Marcus Hutter suggests that at least a small chunk of researchers should look for a theory of intelligence without considering computational resource constraints; once we find that theory, we can approximate it down to something computable.
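
Hutter’s own AIXI model is the canonical example of such a resource-unconstrained theory (the equation below follows Hutter’s published formulation, summarized here from memory, so treat the exact notation as approximate). It picks the action that maximizes expected future reward, summed over every environment program q consistent with the interaction history, where U is a universal Turing machine, a, o, r are actions, observations, and rewards, ℓ(q) is the length of q, and m is the horizon:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The inner sum over all programs is what makes AIXI incomputable; Hutter’s time- and length-bounded variant AIXItl is exactly the kind of approximation back to computability that the paragraph above calls for.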

MML principle

Solomonoff’s theory of induction – Algorithmic probability and Kolmogorov complexity.
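
For reference, the standard definitions behind those two names, plus the MML principle from the heading above (these are textbook formulations, not from the original article): Kolmogorov complexity K(x) is the length of the shortest program producing x, Solomonoff’s algorithmic probability M(x) weights every program whose output begins with x, and MML picks the hypothesis minimizing the two-part message length:

$$K(x) = \min_{p} \{\, \ell(p) : U(p) = x \,\}, \qquad M(x) = \sum_{p \,:\, U(p) \,=\, x\ast} 2^{-\ell(p)}$$

$$\text{MML: choose } H \text{ minimizing } -\log_2 P(H) - \log_2 P(D \mid H)$$

All three formalize the same intuition: the best model of the data is the one that compresses it most.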

Computational combinatorics vs heuristics

In most AI algorithms, it is often the case that you enumerate, in some fashion, and search through all the possible states, checking whether you are at the goal state or whether particular constraints are satisfied for different combinations of states. This looks pretty much like brute force and doesn’t seem intelligent, though it can solve a huge number of problems because of the computational power we are endowed with. There is a hard upper bound on the kinds of problems we can tackle this way because of that computational limitation.

To make the search more efficient and more directional, we feed human-devised heuristics into the search algorithms to guide them. This is the crucial part of human-machine interaction: joining the computer’s power and accuracy to uniquely human heuristic intuition.
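
A minimal sketch of the contrast (my own toy example): both searches below explore the same 10x10 grid, but the second orders its frontier with a hand-written Manhattan-distance heuristic and typically expands far fewer states before reaching the goal.

```python
import heapq
from collections import deque

def brute_force(start, goal, neighbors):
    """Enumerate states breadth-first until the goal turns up."""
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            return len(seen)  # states touched before success
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def heuristic_search(start, goal, neighbors, h):
    """Same search, but a human-supplied heuristic h orders the frontier."""
    frontier, seen = [(h(start), start)], {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return len(seen)
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt))

goal = (9, 9)
h = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])  # Manhattan distance
nbrs = lambda s: [(s[0] + dx, s[1] + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= s[0] + dx <= 9 and 0 <= s[1] + dy <= 9]
print(brute_force((0, 0), goal, nbrs))          # touches nearly all 100 cells
print(heuristic_search((0, 0), goal, nbrs, h))  # touches far fewer
```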

But the main question is: could we ever make a computer think up its own heuristics, appropriate to the problem domain, to guide its search?

Another question: how do we come up with heuristics in the first place? Do we develop them by copying the practices of our peers and mentors through our experience in the domain? Or did we develop an innate sense of what to do and when, in regular day-to-day settings, through millions of years of evolution? Is the ability to think up the right heuristics what true intelligence is?

Limits and hierarchical nature of understanding

Do you know what anamoranistion is?

I guess you don’t. OK, let me explain.

Anamoranistion is a danjiar formed by joining a jukione and a serkolit back to back.

Do you have any idea what it is now?

No. Until and unless I define anamoranistion in terms of words you already know, you can’t understand it.

From this, we can probably deduce empirically that part of learning a new concept is looking at it from the level of the faculties already available in the mind.

Many times, relational structures are used to explain a new concept analogically; for example, c is to d as a is to b. Even here, you need to know the relation between a and b to understand the relation between c and d.

This is why learning a subject is not passive but an active endeavor, in which you investigate the new concept until you can define it using the concepts already available in your mind. You shouldn’t expect the teacher to guess the weakest understanding in the classroom and teach from that perspective.

Sorry for the words at the beginning; they are just made up to convey my point. :)

A simple but costly idea of prediction

Think of the mind as a system that adds a new dimension to its space every time it encounters a new variable in life.

Every time a variable changes from one value to another, the mind tries to justify the change by changes in the other variables already in its space, and/or takes note of any plausible new variable that might be responsible for the change.

In this way it also finds, through experience, the variables relevant in different situations and contexts, and eventually learns to look for patterns in these relevant variables when predicting a target variable.
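
A minimal sketch of this idea (my own toy code, not a claim about how minds actually work): every new variable becomes a dimension, and variables that change together are tallied as candidate explanations for one another.

```python
from collections import defaultdict

class GrowingModel:
    """One dimension per variable ever observed, plus co-change counts."""

    def __init__(self):
        self.state = {}                    # current value of each known variable
        self.co_change = defaultdict(int)  # (a, b) -> times a and b changed together

    def observe(self, snapshot):
        """Take in a dict of variable -> value; unseen variables add dimensions."""
        changed = {v for v, val in snapshot.items() if self.state.get(v) != val}
        self.state.update(snapshot)
        # Justify each change by pairing it with the other co-occurring changes.
        for a in changed:
            for b in changed:
                if a != b:
                    self.co_change[(a, b)] += 1

    def relevant_to(self, var, k=3):
        """The variables whose changes most often accompany changes in `var`."""
        pairs = [(b, n) for (a, b), n in self.co_change.items() if a == var]
        return sorted(pairs, key=lambda p: -p[1])[:k]

m = GrowingModel()
m.observe({"rain": 0, "umbrellas": 0, "traffic": 3})
m.observe({"rain": 1, "umbrellas": 9, "traffic": 3})
print(m.relevant_to("rain"))  # -> [('umbrellas', 2), ('traffic', 1)]
```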

Architecture for AGI

This article mainly runs along the following dimensions.

  1. Perceiving the state of the world and state of yourself in the world.
    1. The power of discriminating and encapsulating different material and immaterial concepts, which are often loosely defined, fuzzy, context-based, and without rigid boundaries.
      1. Natural language processing.
      2. Abstractions as a clustering problem: identifying the level of clustering needed for the current problem domain, and the importance of the similarity measure in clustering. A rudimentary kind of clustering groups objects by how they are useful and in what ways: things to eat clustered as one group, things to talk to, things to use as tools, or which body part to use for which object. Relevant techniques: the k-means algorithm (see the sketch after this list), naive Bayes (inferring classes from the models and models from the classes), and the EM algorithm.
    2. Pattern recognition.
    3. Proprioception and self-referential systems.
    4. object recognition (https://www.youtube.com/watch?v=5yeusVF42K4)
    5. Donald Hoffman’s theory of perception: perceptual systems tuned to fitness outcompete those tuned to truth/reality.
  2. Goals and priorities in them.
    1. Switching between the goals based on the timeframes and dependencies for each goal.
    2. Inbuilt goals, like sex, food, and staying alive; goals we choose for ourselves through culture and experience; and the interaction between the two kinds. Ethical issues of giving AI systems the power to create their own goals.
    3. Decision-making.
  3. Relevance measure for objects perceived in the world to the current goal.
    1. Prioritising perception, or attention, based on relevance to the goal.
    2. Adaptivity of attention based on skill level (acquired by repetition of the same process).
  4. Identifying actions, or a sequence of actions, that affect objects so as to achieve our goals (planning).
    1. An element of stochasticity or randomness in finding the sequence is necessary for creativity when the sequence is unknown or not obvious.
    2. This can come from analogies, empathy and copying, and pattern recognition.
    3. Considering optimisation in the face of multiple paths to a goal.
    4. Related to imagination.
  5. A parallel emotion machine with continuous feedback of utility, or happiness, about your present state in the world.
    1. Decision-making.
    2. Operant conditioning.
  6. Platform knowledge: what information do we have at birth about the world that surrounds us, and how do we build it into a machine?
    1. Evolution.
    2. Jealous baby.
    3. A young baby doesn’t get surprised if a teddy bear passes behind a screen and reemerges as an aeroplane, but a one-year-old does.
    4. No categorical classification at the very beginning.
    5. As a child grows, the number of neurones decreases but the number of synapses increases.
  7. Knowledge representation, search, and weights for retention (what you don’t use, you lose).
    1. Brain plasticity.
    2. Repetition and skill.
    3. Priming and classical conditioning.
    4. The DRM effect
    5. Do elements with similar structure trigger each other? (Hofstadter’s Danny in the Grand Canyon.) Pattern recognition by analogy.
    6. Reconstruction of data as memory in the hippocampus, integrated with already-existing memories.
    7. Procedural, episodic, and semantic memory.
    8. Representation of ideas that could be independent of language, which could allow for generalized reasoning to draw conclusions from combining ideas.
  8. Imagination at the core of knowledge, emotion machine and goals.
    1. Demis Hassabis’s research on imagination and memory.
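
Picking up the clustering note under dimension 1 (referenced there): a minimal k-means sketch, where the “level of clustering” is just the choice of k and the similarity measure is squared Euclidean distance. My own toy code, assuming 2-D points as (x, y) tuples.

```python
import random

def k_means(points, k, iters=20):
    """Cluster 2-D points into k groups around moving centers."""
    centers = random.sample(points, k)
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        # (the similarity measure at work).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster
        # (this alternation has the same shape as the EM algorithm).
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers, clusters

points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 8), (8, 9)]
centers, clusters = k_means(points, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```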

What this article doesn’t talk about is:

  1. Whether a machine can have qualia even though it acts as if it has general intelligence (Mary’s room). How do we test qualia in a machine? Is the Turing test enough for that?
  2. When did consciousness first evolve, and how?
  3. Moral implications of AGI and singularity.

Organising the interaction of all the above:

Why do I think I am talking about AGI? Because we should be able to deduce any human endeavour from the interactions of the above dimensions, at least from infinitely long interactions and their orderings. Connections missed in the diagram below:

  1. Goals to emotion machine.

[Diagram: my-agi-architecture]

Deep dive (one at a time):

  1. Perception:
    1. Why is object recognition difficult?
      1. Objects are defined based on purpose rather than their look or structure. We need to coordinate with modules 3 and 7 to overcome this.
      2. We need to identify an object even when its viewpoint changes. When the viewpoint changes, we face the problem of dimension hopping in training neural networks to recognize the object. Usually, the inputs of a neural network for image recognition are pixels, but when the viewpoint changes, the information that appeared at one pixel in one training instance appears at a different pixel in another instance. This is dimension hopping (see the numpy sketch after this list). #viewpoint_invariance
    2. True language understanding is impossible without internally modeling the world.
  2. Goals and priorities:
  3. Relevance:
  4. Planning (identifying action sequences):
  5. Emotion machine and utility:
  6. Platform knowledge: Jean Piaget, a child development psychologist, argued that children create a mental model of the world around them and continuously modify that model as they interact with reality and receive feedback. He called this phenomenon equilibration.
    1. Infant reflexes.
    2. A child’s brain is less focused, with a higher learning rate across a multitude of things, whereas an adult’s brain is more focused, with better self-control but a lower learning rate.
    3. It seems true that humans come into the world with some innate abilities. Some examples from Steven Pinker:
      1. When a human and a pet are both exposed to speech, the human acquires the language whereas the pet doesn’t, presumably because of some innate difference between them.
      2. Sexual preferences of men and women vary.
      3. Experimental studies on identical twins who were separated at birth and examined at later stages of life show astonishing similarities.
      4. Do we have a moral sense at birth?
    4. The above lists some facts about human nature at birth, but our main focus is to ask: is there a seed meta-program we are endowed with at birth that makes all other learning and behavior possible? If it exists, what is it? Systems built this way are called constructivist systems (Thórisson).
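
Returning to the dimension-hopping point in the perception deep dive (referenced there), a tiny numpy illustration: the same “object”, a two-pixel bright bar, appears at different pixel indices under two viewpoints, so a model keyed to fixed input dimensions sees nothing in common between the two instances.

```python
import numpy as np

# The same object (a bright bar) seen from two viewpoints: a shift of two pixels.
image_a = np.zeros(8); image_a[2:4] = 1.0   # bar occupies pixels 2-3
image_b = np.zeros(8); image_b[4:6] = 1.0   # same bar, shifted to pixels 4-5

print(image_a)                   # [0. 0. 1. 1. 0. 0. 0. 0.]
print(image_b)                   # [0. 0. 0. 0. 1. 1. 0. 0.]
print(np.dot(image_a, image_b))  # 0.0 -- per-pixel inputs share no active dimension
```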

Existing Cognitive Architectures: Two types

  1. Uniformity first:
    1. Soar Architecture
  2. Diversity first:
    1. ACT-R: intelligent tutoring systems are its application (John Anderson).

    2. Autocatalytic Endogenous Reflective Architecture (AERA).

Sigma architecture:

[Diagram: architecture-requisite]

DeepMind (Demis Hassabis):

Deep learning + Reinforcement learning
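
A hedged sketch of the reinforcement-learning half of that recipe: tabular Q-learning with an epsilon-greedy policy. The `env_step(state, action) -> (next_state, reward, done)` interface and the start state 0 are my assumptions. DeepMind’s DQN line of work replaces the table with a deep network trained toward the same temporal-difference target.

```python
import random

def q_learning(env_step, n_states, n_actions,
               episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Learn a state-action value table by trial, error, and reward."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False          # assume episodes start in state 0
        while not done:
            # Epsilon-greedy: mostly exploit the table, sometimes explore.
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env_step(state, action)
            # Temporal-difference update toward reward + discounted best future.
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
    return Q
```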

Miscellaneous:

  1. Art is partly skill and partly stochastic.
  2. Poetry at the intersection of analogy and utility.
  3. How much does general and/or human intelligence have to do with reason? If intelligence is fully correlated with reason, and morality is built on reason alone, then we and AGI have no conflict of interest, as more intelligence only means more morality. #moralitynintelligence
  4. Value system guiding attention, attention setting the problem domain, problem domain looking for heuristics, heuristics guiding the search path.

Glossary:

  1. Distributed representation: many neurones are involved in representing one concept, and one neurone is involved in representing many concepts.