Let’s say we have a machine that understands all the high-level abstractions and patterns in data, and that has built its own models to predict what is coming next. Data science ends here, presenting models and predictions to entrepreneurs or executives so they can make a decision. But is the machine intelligent at this stage? No. To be truly intelligent, it has to be able to make decisions on its own.
So, what guides the decision-making process?
A company makes its decisions to maximize profit. Profit is its value, and that value guides its decision-making process.
A man’s value of survival, deeply coded in his DNA, motivates him to make the decisions that help him secure food, shelter, clothing, sex, and so on.
Evolution has managed to produce rich and distinct emotions that help us identify reinforcements in the real world. This makes it imperative for any machine to be hard-coded with such a value system before it can make use of reinforcement learning or the latest tech meme of that kind.
Whether it is chess or Go, if there is something a machine should definitely know before starting the game, it is the winning state, and possibly evaluation functions for all the different states possible in the game. But that is a small world in which to code valuations/reinforcements. How would you write such code for a machine to evaluate the humongous set of states, instances, and objects in the real world? This makes me believe that modeling the value system, or giving a machine the ability to build its own value system, is at the core of AI. It is too early to ignore Minsky’s rule-based systems.
Induction -> models -> prediction -> decision -> action
For example, finding a number in the telephone directory.
It induces from the data that the numbers are sorted in alphabetical order of names.
It models, or imagines, a straight-line domain for all the data.
It predicts in what part of the book the number can be found.
It takes the decision and the action to open that part.
It again infers from the data and updates its predictions, decisions, and actions accordingly with the new information.
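The directory-lookup steps above amount to interpolation search: model the data as a straight line, predict a position, act, then update with what you observe. A minimal sketch (function and variable names are my own):

```python
def directory_lookup(names, target):
    """Find target in a sorted list by repeatedly predicting where it
    should be, acting on the prediction, then updating with new information."""
    lo, hi = 0, len(names) - 1
    while lo <= hi:
        # Model: assume entries are spread roughly evenly (a straight line).
        # Predict: estimate the position from the target's first letter.
        if names[hi] == names[lo]:
            pos = lo
        else:
            frac = (ord(target[0]) - ord(names[lo][0])) / max(
                1, ord(names[hi][0]) - ord(names[lo][0]))
            pos = lo + int(frac * (hi - lo))
        pos = min(max(pos, lo), hi)
        # Act: open the book at the predicted spot.
        if names[pos] == target:
            return pos
        # Update: narrow the search using the new observation.
        if names[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

On roughly uniform data this homes in on the right page much faster than leafing through linearly, which is the point of the induction → prediction → action loop.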
Marcus Hutter’s approach
A theory does not need to be computable in general, at least in most sciences. But since artificial intelligence sits within computer science, most AI researchers look for a theory of AI from the perspective of computability. Marcus Hutter suggests that at least a small group of researchers should look for a theory of intelligence without considering computational resource constraints; once we find it, we can approximate it down to something computable.
Solomonoff’s theory of induction – Algorithmic probability and Kolmogorov complexity.
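For reference, the two objects named above can be stated informally (here $U$ is a universal prefix machine, $\ell(p)$ is the length of program $p$, and $x*$ means any output beginning with $x$):

```latex
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad
K(x) = \min_{p \,:\, U(p) = x} \ell(p)
```

$M$ is Solomonoff’s algorithmic prior: short programs that reproduce the data dominate the sum, so simpler explanations get more weight. $K$ is Kolmogorov complexity, the length of the shortest such program; neither is computable, which is exactly Hutter’s point about separating the theory from its approximations.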
In most AI algorithms, it is often the case that you enumerate and search through all the possible states, checking whether you are at the goal state or whether particular constraints are satisfied for different combinations of states. This is essentially brute force and does not seem intelligent, even though it can solve a huge number of problems thanks to the computational power we are endowed with. There is a hard upper bound on the kinds of problems we can tackle this way because of computational limitations.
To make the search more efficient and more directional, we inject human-devised heuristics into the search algorithms to guide them. This is the crucial part of human–machine interaction: coupling the machine’s computational power and accuracy with uniquely human heuristic intuition.
But the main question is: could we ever make a computer think up its own heuristics, appropriate to the problem domain, to guide its search?
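As a concrete illustration of heuristic-guided search, here is a minimal A* sketch on a toy 5×5 grid, with Manhattan distance standing in for the human-supplied heuristic (all names and the grid world are illustrative, not from any particular system):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Best-first search: the heuristic steers exploration toward the goal
    instead of brute-force enumeration of every state."""
    frontier = [(heuristic(start), 0, start, [start])]
    seen = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen and seen[node] <= g:
            continue  # already reached this state at least as cheaply
        seen[node] = g
        for nxt, cost in neighbors(node):
            heapq.heappush(
                frontier,
                (g + cost + heuristic(nxt), g + cost, nxt, path + [nxt]))
    return None

# Toy grid world: move in 4 directions inside a 5x5 board, unit step cost.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

# The human-thought-up heuristic: straight-line (Manhattan) distance to (4, 4).
manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
```

With a heuristic of zero this degenerates into the brute-force enumeration described above; the heuristic is what makes the search directional.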
Another question: how do we come up with heuristics in the first place? Do we develop them by copying the practices of our peers and mentors through experience in the domain? Or do we have an innate sense of what to do in everyday settings because of millions of years of evolution? Is the ability to think up the right heuristics what true intelligence is?
Do you know what anamoranistion is?
I guess you don’t, so let me explain.
anamoranistion is a danjiar formed by joining jukione and serkolit back to back.
Do you have any idea what it is now?
No. Unless I come down to defining anamoranistion in terms of words you already know, you can’t understand it.
From this, we can deduce that part of learning a new concept is looking at it from the level of the faculties already available in the mind.
Relational structures are often used to explain a new concept analogically: c is to d as a is to b. Even here, you need to know the relation between a and b to understand the relation between c and d.
This is why learning a subject is not passive but an active endeavor: you need to investigate the new concept until you can define it using the concepts already available in your mind. You shouldn’t expect the teacher to guess the weakest understanding in the classroom and teach from that perspective.
Sorry for the words at the beginning; they are just made up to convey my point. :)
Think of the mind as a system that adds a new dimension to its space every time it encounters a new variable in life.
Every time a variable changes from one value to another, it tries to justify the change by changes in the other variables already in its space, and/or takes note of any plausible new variable that might be responsible for the change.
This way it also finds the relevant variables in different situations and contexts through experience and eventually learns to look for patterns in these relevant variables for predicting the target variable.
This article mainly runs along the following dimensions.
- Perceiving the state of the world and state of yourself in the world.
- The power of discriminating and encapsulating different material and immaterial concepts, which are often indefinitely defined, fuzzy, context-based, and have no rigid boundaries.
- Natural language processing.
- Abstractions – the clustering problem. Identifying the level of clustering needed for the current problem domain. The importance of the similarity measure in clustering. A rudimentary kind of clustering groups objects by how they can be useful, and in what ways: things to eat clustered as one group, things to talk to, things to use as tools, or which body part to use for which object. The k-means algorithm. Naive Bayes: inferring classes from the models and models from the classes. The EM algorithm.
- pattern recognition.
- proprioception and self-referential systems.
- object recognition (https://www.youtube.com/watch?v=5yeusVF42K4)
- Donald Hoffman’s theory of perception: perceptual systems tuned to fitness outcompete those tuned to truth/reality.
- Goals and priorities in them.
- Switching between the goals based on the timeframes and dependencies for each goal.
- Inbuilt goals, like getting sex and food and staying alive; goals we choose for ourselves through culture and experience; and the interaction between the two kinds. Ethical issues of giving AI systems the power to create their own goals.
- Relevance measure for objects perceived in the world to the current goal.
- Prioritising perception or attention based on relevance to goal.
- Adaptivity of attention based on skill level (acquired by repetition of the same process).
- Identifying actions, or a sequence of actions, that impact objects so as to achieve our goals (planning).
- An element of stochasticity or randomness in finding the sequence is necessary for creativity when the sequence is unknown or not obvious.
- This can happen from analogies, empathy and copying, pattern recognition.
- Considering optimisation in face of multiple paths to a goal.
- related to imagination.
- A parallel emotion machine with continuous feedback of utility or happiness about your present state in the world.
- operant conditioning.
- Platform knowledge: what information do we have at our birth about the world we are surrounded by and how to build it for a machine?
- Jealous baby.
- A young baby doesn’t get surprised if a teddy bear passes behind a screen and reemerges as an aeroplane, but a one-year-old does.
- No categorical classification at the very beginning.
- As a child grows, the number of neurones decreases but the number of synapses increases.
- Knowledge representation, search, and weights for retention (what you don’t use, you lose).
- Brain plasticity.
- Repetition and skill.
- Priming and classical conditioning.
- The DRM effect
- Do elements containing similar structure trigger each other? (Hofstadter’s Danny in the Grand Canyon.) Pattern recognition by analogy.
- Reconstruction of data as memory in the hippocampus, woven into already existing memory.
- Procedural, episodic, semantic.
- Representation of ideas that could be independent of language, which could allow for generalized reasoning to draw conclusions from combining ideas.
- Imagination at the core of knowledge, emotion machine and goals.
- Demis Hassabis’s research on imagination and memory.
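The k-means algorithm named in the clustering bullet above can be sketched in a few lines (a plain, illustrative implementation, not any particular library’s):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate between assigning each point to its
    nearest centre and moving each centre to the mean of its points."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)  # initialise centres from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assignment step: nearest centre by squared Euclidean distance.
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centres[i])))
            clusters[i].append(p)
        # Update step: each centre moves to the mean of its cluster.
        centres = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl
                   else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres, clusters
```

The similarity measure (here, squared Euclidean distance) is doing all the conceptual work, which is exactly the point made in the bullet: change the measure and you change what counts as "the same kind of thing".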
What this article doesn’t talk about is:
- Whether a machine can have qualia even though it acts like it has general intelligence. Mary’s room. How do we test qualia in a machine? Is the Turing test enough for that?
- When did consciousness first evolve, and how?
- Moral implications of AGI and singularity.
Organising the interaction of all the above:
Why do I think I am talking about AGI? Because we should be able to derive any human endeavour from the interactions of the above dimensions, at least from infinitely long interactions and their orderings. Connections missing in the diagram below:
- Goals to emotion machine.
Deep dive (one at a time):
- Why is object recognition difficult?
- Objects are defined by their purpose rather than their look or structure. We need to coordinate with modules 3 and 7 to overcome this.
- We need to identify an object even when its viewpoint changes. When the viewpoint changes, we get the problem of dimension hopping while training neural networks to recognize the object. The inputs to a neural network for image recognition are usually pixels, but when the viewpoint changes, the value that appeared at one pixel in one training instance appears at a different pixel in another. This is dimension hopping. #viewpoint_invariance
- True language understanding is impossible without internally modeling the world.
- Goals and priorities
- Planning (identifying action sequences):
- Emotion machine and utility:
- Platform knowledge: Jean Piaget, the child development psychologist, argued that children create a mental model of the world around them and continuously modify that model as they interact with reality and receive feedback. He called this phenomenon equilibration.
- Infant reflexes .
- A child’s brain is less focused, with a higher learning rate across a multitude of things, whereas an adult’s is more focused, with better self-control but a lower learning rate.
- It seems true that humans come into the world with some innate abilities. Some examples from Steven Pinker:
- When a human and a pet are both exposed to speech, the human acquires the language whereas the pet doesn’t, presumably because of some innate difference between them.
- Sexual preferences of men and women vary.
- Experimental studies on identical twins separated at birth and examined later in life show astonishing similarities.
- Do we have moral sense at birth?
- The above discusses some facts about human nature at birth, but our main focus is this: is there a seed meta-program we are endowed with at birth that makes all other learning and behavior possible? If it exists, what is it? Systems built this way are called constructivist systems (Thórisson).
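The dimension-hopping problem from the object-recognition bullet can be made concrete with a toy example: shifting an "image" by one pixel moves every value into a neighbouring input dimension, while a translation-invariant summary of the same image stays fixed (the data and feature choice here are made up for illustration):

```python
def shift_right(image, by=1):
    """Shift each row of a 2-D image to the right, padding with zeros."""
    return [[0] * by + row[:-by] for row in image]

# The same "object" (a bright 2x2 patch) seen from two viewpoints.
view_a = [[0, 9, 9, 0],
          [0, 9, 9, 0]]
view_b = shift_right(view_a)

flatten = lambda img: [px for row in img for px in row]

# Raw pixel inputs disagree: each value now lives in a neighbouring
# input dimension ("dimension hopping").
# A translation-invariant summary (here, the sorted intensity multiset)
# agrees for both viewpoints.
histogram = lambda img: sorted(flatten(img))
```

A real system would use something richer than an intensity histogram (e.g. convolutional features), but the contrast between the two representations is the whole problem in miniature.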
Existing Cognitive Architectures: Two types
- Uniformity first:
- Soar Architecture
- Diversity first:
- ACT-R: intelligent tutoring systems are its application. John Anderson.
Autocatalytic Endogenous Reflective Architecture (AERA)
DeepMind (Demis Hassabis):
Deep learning + Reinforcement learning
- Art is partly skill and partly stochastic.
- Poetry at the intersection of analogy and utility.
- How much does general and/or human intelligence have to do with reason? If it is fully correlated with reason, and morality is built on reason alone, then we and AGI have no conflict of interest, as more intelligence only means more morality. #moralitynintelligence
- Value system guiding attention, attention setting the problem domain, problem domain looking for heuristics, heuristics guiding the search path.
- Distributed representation: many neurones are involved in representing one concept, and one neurone is involved in representing many concepts.
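The distributed-representation idea can be illustrated with a toy binary code (the concepts and activation patterns below are made up):

```python
# Rows are concepts, columns are "neurons" (1 = active for that concept).
concepts = {
    "dog": [1, 1, 0, 1, 0],
    "cat": [1, 0, 1, 0, 1],
    "car": [0, 1, 1, 1, 1],
}

# Each concept is represented by several neurons...
neurons_per_concept = {c: sum(v) for c, v in concepts.items()}

# ...and each neuron participates in several concepts.
concepts_per_neuron = [sum(v[i] for v in concepts.values())
                       for i in range(5)]
```

Because codes overlap, similar concepts share active neurons, and losing any single neuron degrades the representation gracefully instead of erasing one concept outright.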