Let’s say we have a machine that understands all the high-level abstractions and patterns in data. Let’s say it has built, on its own, models to predict what’s coming next. Data science ends here, presenting models and predictions to entrepreneurs or executives who make the decision. But is the machine intelligent at this stage? No; to be truly intelligent, it has to be able to make decisions on its own.
So, what guides the decision-making process?
A company makes its decisions to maximize profit. Profit is its value, and that value guides its decision-making process.
A man’s value of survival, deeply coded in his DNA, motivates him to make decisions that could help him secure food, shelter, clothing, sex, and so on.
Evolution has managed to produce rich and distinct emotions that help us identify reinforcements in the real world. This makes it imperative that any machine be hard-coded with the rules of such a value system before it can make use of reinforcement learning or the latest tech meme of that kind.
Whether it is chess or Go, if there is one thing a machine should definitely know before starting the game, it is the winning state, and possibly evaluation functions for all the different states possible in the game. But a game is a small world in which to code valuations/reinforcements. How would you go about writing such code for a machine to evaluate the humongous set of states, instances, and objects in the real world? This makes me believe that modeling the value system, or giving a machine the ability to build its own value system, is at the core of AI. It is too early to ignore Minsky’s rule-based systems.
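The game case can be made concrete. A minimal Python sketch of a hand-coded value system for tic-tac-toe, where the winning states are few enough to enumerate by hand (the names and the crude ±1 scoring are illustrative, not a definitive design):

```python
# Hard-coded "value system" for tic-tac-toe: the winning states are the
# eight complete lines, enumerable by hand.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if some line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board, player):
    """Crude hand-coded valuation: +1 win, -1 loss, 0 otherwise."""
    w = winner(board)
    if w is None:
        return 0
    return 1 if w == player else -1

board = ['X', 'X', 'X',
         'O', 'O', ' ',
         ' ', ' ', ' ']
print(evaluate(board, 'X'))  # 1: X has completed the top row
```

In the real world there is no such finite list of winning lines to enumerate, which is exactly the difficulty raised above.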
This article mainly runs along the following dimensions.
- Perceiving the state of the world and state of yourself in the world.
- The power of discriminating and encapsulating different material and immaterial concepts, which are often indefinitely defined, fuzzy, context-based, and have no rigid boundaries.
- Natural language processing.
- Abstractions: the clustering problem. Identifying the level of clustering needed for the current problem domain; the importance of the similarity measure in clustering. A rudimentary kind of clustering is grouping objects in the world by how they could be useful to us, and in what ways: things to eat clustered as one group, things to talk to, things to use as tools, or which body part to use for which object. The k-means algorithm; naive Bayes inferring classes from models and models from classes; the EM algorithm.
- pattern recognition.
- proprioception and self-referential systems.
- object recognition (https://www.youtube.com/watch?v=5yeusVF42K4)
- Donald Hoffman’s theory of perception: perceptual systems tuned to fitness outcompete those tuned to truth/reality.
- Goals and priorities in them.
- Switching between goals based on the timeframes and dependencies of each goal.
- Inbuilt goals, like getting sex and food and staying alive; goals we choose for ourselves through culture and experience; and the interaction between the two kinds. The ethical issues of giving AI systems the power to create their own goals.
- Relevance measure for objects perceived in the world to the current goal.
- Prioritising perception or attention based on relevance to goal.
- Adaptivity of attention based on skill level (acquired by repetition of the same process).
- Identifying actions, or sequences of actions, that impact objects so as to achieve our goals (planning).
- An element of stochasticity or randomness in finding the sequence is necessary for creativity when the sequence is unknown or not obvious.
- This can come from analogies, empathy and copying, and pattern recognition.
- Considering optimisation in the face of multiple paths to a goal.
- Related to imagination.
- A parallel emotion machine with continuous feedback of utility or happiness about your present state in the world.
- operant conditioning.
- Platform knowledge: what information do we have at birth about the world that surrounds us, and how do we build it into a machine?
- Jealous baby.
- A young baby doesn’t get surprised if a teddy bear passes behind a screen and reemerges as an aeroplane, but a one-year-old does.
- No categorical classification at the very beginning.
- As a child grows, the number of neurones decreases but the number of synapses increases.
- Knowledge representation, search, and weights for retention (what you don’t use, you lose).
- Brain plasticity.
- Repetition and skill.
- Priming and classical conditioning.
- The DRM effect
- Do elements containing similar structure trigger each other? (Hofstadter’s Danny in the Grand Canyon.) Pattern recognition by analogy.
- Reconstruction of data as memory at the hippocampus, integrated with already existing memory.
- Procedural, episodic, and semantic memory.
- Representation of ideas that could be independent of language, which could allow for generalized reasoning to draw conclusions from combining ideas.
- Imagination at the core of knowledge, emotion machine and goals.
- Demis Hassabis’s research on imagination and memory.
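One of the dimensions above, clustering, can be sketched quickly. A minimal k-means in pure Python (the toy points and the naive first-k initialisation are illustrative; real uses would rely on a library and better seeding):

```python
def kmeans(points, k, iters=20):
    # Naive initialisation: the first k points become the initial centers.
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        # (squared Euclidean distance as the similarity measure).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centers[i] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return centers, clusters

# Two obvious groups of "objects" in a 2-D world.
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, clusters = kmeans(points, k=2)
print(centers)  # one center near each group
```

The choice of similarity measure and of k is exactly the "level of clustering" question the bullet raises; k-means answers neither on its own.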
What this article doesn’t talk about:
- Whether a machine can have qualia even though it acts as if it has general intelligence. Mary’s room. How do we test qualia in a machine? Is the Turing test enough for that?
- When did consciousness first evolve, and how?
- Moral implications of AGI and singularity.
Organising the interaction of all the above:
Why do I think I am talking about AGI? Because we should be able to deduce any human endeavour from the interactions of the above dimensions, at least from infinitely long interactions and their orderings. Connections missed in the diagram below:
- Goals to emotion machine.
Deep dive (one at a time):
- Why is object recognition difficult?
- Objects are defined by their purpose rather than their look or structure. We need to coordinate with modules 3 and 7 to overcome this.
- We need to identify an object even when our viewpoint of it changes. When the viewpoint changes, we face the problem of dimension hopping while training neural networks to recognise the object. Usually, the inputs to a neural network for image recognition are pixels, but when the viewpoint changes, the input that arrives at one pixel in one training instance arrives at a different pixel in another training instance. This is dimension hopping. #viewpoint_invariance
- True language understanding is impossible without internally modeling the world.
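The dimension-hopping point has a standard partial answer: weight sharing, as in convolutional networks. A toy 1-D sketch (illustrative, no libraries) showing that one shared filter responds to the same pattern wherever it lands, so the response does not depend on which input dimension the feature occupies:

```python
def correlate(signal, kernel):
    """Valid cross-correlation: the SAME weights applied at every offset."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

kernel = [1, 1]               # a tiny pattern detector
a = [0, 0, 1, 1, 0, 0, 0]     # pattern near the left
b = [0, 0, 0, 0, 1, 1, 0]     # same pattern, shifted right

# The peak response is identical; only its position moves.
print(correlate(a, kernel))   # [0, 1, 2, 1, 0, 0]
print(correlate(b, kernel))   # [0, 0, 0, 1, 2, 1]
```

A per-pixel fully connected layer would have to relearn the pattern at every position; the shared filter learns it once.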
- Goals and priorities
- Planning (identifying action sequences):
- Emotion machine and utility:
- Platform knowledge: Jean Piaget, a child development psychologist, argued that children create a mental model of the world around them and continuously modify that model as they interact with reality and receive feedback. He called this phenomenon equilibration.
- Infant reflexes.
- A child’s brain is less focussed, with a higher learning rate across a multitude of things, whereas an adult’s is more focussed, with better self-control but a lower learning rate.
- It seems true that humans come into the world with some innate abilities. Some examples from Steven Pinker:
- When a human and a pet are both exposed to speech, the human acquires the language whereas the pet doesn’t, presumably because of some innate difference between them.
- Sexual preferences of men and women vary.
- Experimental studies on identical twins separated at birth and examined at later stages of life show astonishing similarities.
- Do we have a moral sense at birth?
- The above discusses some facts about human nature at birth, but our main focus is this: is there a seed meta-program that we are endowed with at birth which makes all other learning and behavior possible? If it exists, what is it? Systems of this kind are called constructivist systems (Thórisson).
Existing Cognitive Architectures: Two types
- Uniformity first:
- Soar Architecture
- Diversity first:
- ACT-R: intelligent tutoring systems are among its applications. John Anderson.
Autocatalytic Endogenous Reflective Architecture (AERA)
DeepMind (Demis Hassabis):
Deep learning + Reinforcement learning
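The reinforcement-learning half of that combination can be sketched in tabular form, without the deep network. A toy Q-learning agent on a five-state corridor, where reward sits only at the right end; all hyperparameters here are illustrative:

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left / step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = rng.randrange(2) if rng.random() < eps \
            else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Temporal-difference update toward the bootstrapped target.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy read off the table: which action each state prefers.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

DeepMind’s systems replace the table with a deep network that generalises across states; that substitution is what lets the combination scale beyond toy corridors.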
- Art is partly skill and partly stochastic.
- Poetry at the intersection of analogy and utility.
- How much does general and/or human intelligence have to do with reason? If it is fully correlated with reason, and morality is built on reason alone, then we and AGI have no conflict of interest, as more intelligence only means more morality. #moralitynintelligence
- A value system guiding attention, attention setting the problem domain, the problem domain looking for heuristics, and heuristics guiding the search path.
- Distributed representation: many neurones are involved in representing one concept, and one neurone is involved in representing many concepts.
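The last bullet can be illustrated directly. A toy sketch of distributed representation with made-up binary activation patterns (the concepts and vectors are invented for illustration, not real data):

```python
# Each concept activates many "neurons", and each neuron participates in
# many concepts, so related concepts share active units.
concepts = {
    #          n0 n1 n2 n3 n4 n5
    "cat":    [1, 1, 0, 1, 0, 0],
    "dog":    [1, 1, 0, 0, 1, 0],
    "hammer": [0, 0, 1, 0, 0, 1],
}

def overlap(a, b):
    """Number of neurons active in both patterns."""
    return sum(x & y for x, y in zip(a, b))

# One neuron serves several concepts...
n0_uses = [name for name, vec in concepts.items() if vec[0]]
print(n0_uses)                                      # ['cat', 'dog']
# ...and similar concepts share more neurons than dissimilar ones.
print(overlap(concepts["cat"], concepts["dog"]))    # 2
print(overlap(concepts["cat"], concepts["hammer"])) # 0
```

Contrast this with a localist (one-neuron-one-concept) code, where no such overlap structure exists and similarity cannot be read off the representation.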