- Think of reading as an active pursuit rather than a passive one.
- Don’t get bogged down in details; sprint through them and look for the author’s gist.
- Read with the intention of finding out the gist and pace yourself accordingly.
- Learn to adjust your speed based on the complexity of the content.
- Skim through the content if you already know what the author is going to say.
- Stop vocalization during reading.
- Don’t be a word-by-word reader; be a phrase reader.
- Try to speed up your reading; that will help you stay more concentrated on the content.
- Develop your vocabulary.
- Like any other skill, the more you practice, the better you become. Make sure you practice with the above principles.
- Finally, don’t consider it an ephemeral course but a lifelong journey of betterment.
Let’s say we have a machine that understands all the high-level abstractions and patterns in data. Let’s say it has built its own models to predict what’s coming next. Data science ends here, presenting models and predictions to entrepreneurs or executives to make a decision. But is it intelligent at this stage? No; it has to be able to make decisions on its own to be truly intelligent.
So, what guides the decision-making process?
A company makes its decision to maximize its profit. Profit is their value and that guides their decision-making process.
A man’s value of survival, deeply coded in his DNA, motivates him to make proper decisions that help him get food, shelter, clothes, sex, etc.
Evolution has managed to produce very rich and distinct emotions that help us identify reinforcements in the real world. This makes it imperative for any machine to be hard-coded with these rules of a value system in order to make use of reinforcement learning or any other latest tech meme of that kind.
Whether it is chess or Go, if there is something a machine should definitely know before starting the game, it is the winning state and possibly evaluation functions for all the different states possible in the game. But that’s a small world in which to code valuations/reinforcements. How would you go about writing such code for a machine to evaluate the humongous set of states, instances, and objects in the real world? This makes me believe that modeling the value system, or giving a machine the ability to build its own value system, is at the core of AI. It is too early to ignore Minsky’s rule-based systems.
Understanding customer personality from user data.
Objective: Learn the habits of the user from his digital activities. Reflect them back to him for retrospection. Predict his behavior the next day.
Digital data: which apps and sites does he check, and with what frequency?
Data about the lifetime of a product would be very valuable to recommend another product of that category at the right time.
Suppose you have a function add with arguments a and b. We usually have certain assumptions or requirements about what format the input has to be in; in this case, it might be numbers. What if they are strings? It will throw an error. But if we give the same input to a human, he would probably concatenate the two strings, or map each character to an integer and add the result, or something of that sort.
So, an input modifier is an imaginative agent that manipulates the input we are given to fit the function’s argument requirements. This would allow for more collaboration among the different functionalities at our disposal and may open the door for creativity in computation. However, there would be a trade-off of a little uncertainty.
When you give circle and square arguments to a put-together function, the input modifier can think of them as one inside the other, one after the other, and so many other possibilities. This uncertainty is inevitable even in human cognition.
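A minimal sketch of this input-modifier idea, assuming a simple fallback chain of coercions (the specific strategies and their order here are my own illustrative choices, not a fixed design):

```python
# Sketch of an "input modifier": instead of failing on unexpected input
# types, try sensible reinterpretations before giving up.

def add(a, b):
    return a + b

def input_modifier(func, a, b):
    try:
        return func(a, b)
    except TypeError:
        # Strategy 1: try to interpret both arguments as numbers.
        try:
            return func(float(a), float(b))
        except (TypeError, ValueError):
            pass
        # Strategy 2: fall back to treating both as strings (concatenation).
        return func(str(a), str(b))

print(input_modifier(add, 2, 3))    # 5
print(input_modifier(add, "2", 3))  # 5.0 (both coerced to numbers)
print(input_modifier(add, "ab", 3)) # 'ab3' (falls back to concatenation)
```

Note the promised trade-off: the caller gets an answer in more situations, but can no longer be certain which interpretation produced it.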
Universal evaluation of each person’s impact on the world and fairly distributing wealth based on that.
Principal component analysis for text summarization.
App for updating customers with latest IPOs and their prospectus.
App for local retail – buying things based on their closeness for faster delivery.
Train a classifier to identify a student’s stage in Flow state graph in Udacity.
Model a situation as a recurrent neural network fed with a sequence where each sequence image is learned by a convolutional neural network.
Is text summarization mandatory for NLP? When you read a book, you constantly summarize it and build a model in your brain, and you continue reading with that summary in context. So, summarizing text and feeding the summary back into an LSTM network could make NLU more efficient.
The most important principle to be an algorithm designer is not to be content.
The master method for analyzing the running time of divide and conquer algorithms:
- A black-box solution for the running time: you input a few parameters related to the recurrence.
- Assumption: all subproblems are of equal size.
n- original problem size
a – number of recursive calls made in each instance; the rate at which subproblems proliferate. (Evil)
b – the factor by which the input size shrinks in each recursive call.
d- Polynomial exponent of the remaining work needed to merge solutions for the final solution.
b^d – Rate of work shrinkage per subproblem. (good)
Case 1: a = b^d – same work at each level, giving O(n^d log n).
Case 2: a < b^d – more work at the root level, giving O(n^d).
Case 3: a > b^d – more work at the leaf level, giving O(n^(log_b a)).
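The black box the notes describe can be packaged directly as a small function. A sketch, assuming the standard recurrence T(n) <= a*T(n/b) + O(n^d):

```python
import math

# Master method as a black box: given the recurrence parameters a, b, d,
# return the asymptotic running-time bound as a string.

def master_method(a, b, d):
    if a == b ** d:     # Case 1: same work at each level
        return f"O(n^{d} log n)"
    elif a < b ** d:    # Case 2: work dominated by the root
        return f"O(n^{d})"
    else:               # Case 3: work dominated by the leaves
        return f"O(n^{round(math.log(a, b), 3)})"

print(master_method(2, 2, 1))  # merge sort: O(n^1 log n)
print(master_method(1, 2, 0))  # binary search: O(n^0 log n), i.e. O(log n)
print(master_method(4, 2, 1))  # four half-size calls plus linear merge: O(n^2.0)
```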
Algorithm Design paradigms:
- Divide and conquer:
- Write the base case.
- Use recursion to solve subproblems of the problem.
- Combine the subproblem solutions.
- Store the results of subproblems in a hashmap and use them to trim other repeating recursive paths. (Dynamic programming)
- Randomization: Like in quick sort.
- Decomposition principle:
- Greedy algorithms.
- Dynamic programming.
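The hashmap-memoization bullet above can be sketched with a classic example (Fibonacci is an illustrative choice of mine, not from the notes):

```python
# Store subproblem results in a hashmap (memoization) to trim
# repeated recursive paths: the essence of dynamic programming.

memo = {}

def fib(n):
    if n <= 1:           # base case
        return n
    if n in memo:        # repeated recursive path: reuse the stored result
        return memo[n]
    memo[n] = fib(n - 1) + fib(n - 2)  # solve subproblems, then store
    return memo[n]

print(fib(50))  # 12586269025, in O(n) time instead of O(2^n)
```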
n – number of vertices, m – number of edges
Graph partitioning: Cuts of a graph, Minimum cut
- Adjacency matrix. O(n^2)
- Adjacency lists. O(n+m)
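A quick sketch of the two representations and their space costs, on an invented example graph:

```python
# Contrast the two graph representations for n vertices and m edges.

n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # m = 4 undirected edges

# Adjacency list: O(n + m) space.
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
print(adj)        # {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

# Adjacency matrix: O(n^2) space.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1
print(matrix[0])  # [0, 1, 1, 0]
```

For sparse graphs (m much smaller than n^2), the adjacency list is the clear winner, which is why graph-search algorithms are usually stated in terms of it.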
Strongly connected components: Regions where you can go from any node A to any node B in the directed graph.
Kosaraju’s 2-pass strongly connected components algorithm:
One of the smartest and most beautiful algorithms.
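A compact sketch of the 2-pass idea (the example graph and helper names are mine): pass 1 runs DFS on the reversed graph to compute a finishing order; pass 2 runs DFS on the original graph in decreasing finishing order, and each DFS tree in that pass is one strongly connected component.

```python
from collections import defaultdict

def kosaraju(n, edges):
    graph = defaultdict(list)
    rev = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        rev[v].append(u)

    # Pass 1: DFS on the reversed graph, recording finishing order.
    visited, order = set(), []
    def dfs1(u):
        visited.add(u)
        for v in rev[u]:
            if v not in visited:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if u not in visited:
            dfs1(u)

    # Pass 2: DFS on the original graph in decreasing finishing order.
    comp = {}
    def dfs2(u, label):
        comp[u] = label
        for v in graph[u]:
            if v not in comp:
                dfs2(v, label)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)  # label each SCC by the vertex that starts it
    return comp

# Two 3-cycles joined by one edge: SCCs are {0, 1, 2} and {3, 4, 5}.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
comp = kosaraju(6, edges)
print(comp)
```

Both passes are plain DFS, so the whole algorithm runs in O(n + m).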
The structure of internet:
Further reading: Networks, crowds and markets.
arcs – directed edges; ordered pairs.
A tuple – (S,s1,A,P,R)
S – finite set of states.
s1 – initial state.
A – finite set of actions.
P – Given a state s1 and an action a, what is the probability of ending up at a particular state s2? This information is provided by P, called the state transition probability matrix, with dimensions S*A*S.
R – What is the reward for taking an action a at the state s?
Pi – Policy: which action to take in which state. A mapping from the state space to the action space. The optimal policy is the policy with the maximum expected reward.
Value function – Expected reward starting from state s and following policy pi.
Bellman equations –
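The notes leave the equations blank; the standard forms, using the S, A, P, R definitions above and assuming a discount factor gamma (which the tuple omits), are:

```latex
% Bellman expectation equation for a policy \pi:
V^{\pi}(s) = R(s, \pi(s)) + \gamma \sum_{s'} P(s' \mid s, \pi(s)) \, V^{\pi}(s')

% Bellman optimality equation:
V^{*}(s) = \max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a) \, V^{*}(s') \Big]
```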
Ways to find an optimal policy:
Optimal policy when starting in state s is the policy whose value function is maximum starting from state s.
Think about the case where a particular policy is at least as good as another policy for all initial states. The first policy is then greater than or equal to the second, and this relation is called a partial ordering.
There always exists a policy whose partial ordering with respect to every other policy is greater than or equal. Such a policy (there may be more than one) is called an optimal policy.
- Policy iteration:
- At each state, pick the action with the maximum value function.
- You get a new policy.
- Go back to step 1 and loop until the new policy and the old policy are the same.
- Value iteration:
- Find an optimal value function rather than an explicit policy.
- In every iteration, improve the value vector.
- Once you converge to a particular value vector, use it to find the optimal policy.
- This is cheap compared to policy iteration.
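The steps above can be sketched on a tiny invented 2-state MDP (the states, transitions, rewards, and gamma here are all my example, not from the notes):

```python
# Value iteration: improve the value vector until it converges,
# then read the optimal policy off the converged values.

# P[s][a] = list of (probability, next_state); R[s][a] = immediate reward.
P = {
    0: {"stay": [(1.0, 0)], "go": [(0.8, 1), (0.2, 0)]},
    1: {"stay": [(1.0, 1)], "go": [(1.0, 0)]},
}
R = {
    0: {"stay": 0.0, "go": 1.0},
    1: {"stay": 2.0, "go": 0.0},
}
gamma = 0.9

V = {0: 0.0, 1: 0.0}
for _ in range(1000):  # improve the value vector each iteration
    V = {
        s: max(
            R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
            for a in P[s]
        )
        for s in V
    }

# Use the converged value vector to extract the optimal policy.
policy = {
    s: max(P[s], key=lambda a: R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]))
    for s in V
}
print(policy)  # {0: 'go', 1: 'stay'}
```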
Reinforcement learning is a Markov decision process with an unknown transition model and/or reward distribution.
The agent can observe samples and induce the transition model and reward distribution from them.
It uses Q-values instead of the value function.
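A minimal Q-learning sketch of this setup, where the agent sees only sampled transitions rather than P and R themselves (the toy environment and all constants are invented for illustration):

```python
import random

random.seed(0)

def step(s, a):
    # Hidden dynamics the agent must learn purely from samples:
    # action 1 in state 0 usually pays off; everything else does not.
    if s == 0 and a == 1:
        return (1.0, 1) if random.random() < 0.8 else (0.0, 0)
    return (0.0, 0)

actions = [0, 1]
Q = {(s, a): 0.0 for s in (0, 1) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

s = 0
for _ in range(5000):
    # Epsilon-greedy action selection over Q-values.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda a: Q[(s, a)])
    r, s2 = step(s, a)
    # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s2, a').
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    s = s2

print(max(actions, key=lambda a: Q[(0, a)]))  # learned greedy action in state 0
```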
To be continued……….
Utility: The utility of a policy at a state is what happens if we start running it from that state.
Reward gives us immediate feedback, but utility gives us long-term feedback. Utilities allow you to take short-term negatives for long-term positives.
Credit assignment problem:
A truly random number generator is impossible in software, so we usually have a pseudo-random generator that is seeded with an initial number (often derived from something like the current system time) and then computes subsequent numbers deterministically from that seed. For debugging stochastic programs, Python’s random.seed() will generate the same sequence of random numbers again and again when seeded with the same number.
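A short demonstration of the reproducibility point:

```python
import random

# Re-seeding with the same number replays the exact same sequence,
# which is what makes stochastic programs debuggable.

random.seed(42)
first = [random.randint(0, 100) for _ in range(3)]

random.seed(42)  # same seed again
second = [random.randint(0, 100) for _ in range(3)]

print(first == second)  # True: identical sequences from identical seeds
```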