Understanding customer personality from user data.
Objective: learn a user's habits from their digital activity, reflect those habits back to them as a retrospective, and predict their behavior for the next day.
Digital data: which apps and sites the user checks, and with what frequency.
Data about a product's lifetime would be very valuable for recommending another product in that category at the right time.
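One minimal sketch of the habit-prediction idea: a first-order Markov model over the app-usage log, predicting the most likely next app from the current one. The app names and the log below are made up for illustration.

```python
from collections import Counter, defaultdict

def train_transitions(history):
    """Count how often each app follows another in the usage log."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current_app):
    """Predict the most frequent follower of `current_app`, or None."""
    if current_app not in counts:
        return None
    return counts[current_app].most_common(1)[0][0]

# Hypothetical one-day usage log.
log = ["mail", "news", "mail", "news", "mail", "chat"]
model = train_transitions(log)
print(predict_next(model, "mail"))  # "news" (follows "mail" twice, "chat" once)
```

A real version would condition on time of day and day of week as well, but the counting idea is the same.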
Suppose you have a function add with arguments a and b. We usually have certain assumptions or requirements about the format that input has to be in; in this case, it might be numbers. What if they are strings? It will throw an error. But if we give the same input to a human, they would probably concatenate the two strings, or map each character to an integer and add the results, or something of that sort.
So an input modifier is an imaginative agent that manipulates the input we are given so that it fits a function's argument requirements. This would allow more collaboration among the different functionalities at our disposal and may open the door to creativity in computation. The trade-off is that it introduces a little uncertainty.
When you give circle and square arguments to a put-together function, the input modifier can interpret them as one inside the other, one after the other, and many other possibilities. This uncertainty is inevitable even in human cognition.
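A minimal sketch of an input modifier in Python, assuming one particular coercion strategy (try a numeric interpretation first, then fall back to character codes for strings, as in the human example above):

```python
def add(a, b):
    """A plain add that expects numbers."""
    return a + b

def input_modifier(func, a, b):
    """Hypothetical input modifier: coerce the arguments until `func` accepts them.

    The strategy here is an assumption, not the only possibility: try the
    inputs as numbers; if both are strings, improvise by summing character
    codes, the way a human might invent an interpretation.
    """
    try:
        return func(float(a), float(b))        # numbers, or numeric strings
    except (TypeError, ValueError):
        if isinstance(a, str) and isinstance(b, str):
            # Map each character to an integer and add the results.
            return func(sum(map(ord, a)), sum(map(ord, b)))
        raise

print(input_modifier(add, 2, 3))       # 5.0
print(input_modifier(add, "2", "3"))   # numeric strings still add: 5.0
print(input_modifier(add, "ab", "c"))  # character codes: 97+98 and 99 -> 294
```

The uncertainty shows up exactly where the note predicts: "ab" plus "c" could just as defensibly return "abc", and the modifier has to pick one interpretation.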
Universal evaluation of each person's impact on the world, and fair distribution of wealth based on that.
Principal component analysis for text summarization.
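A minimal sketch of that idea as extractive summarization: build a bag-of-words term-sentence matrix, take its first principal component via SVD, and keep the sentences that project most strongly onto it. The toy sentences are made up.

```python
import numpy as np

def pca_summarize(sentences, k=1):
    """Return the k sentences with the largest |projection| onto the
    first principal component of a bag-of-words sentence matrix."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    # Rows = sentences, columns = word counts.
    X = np.zeros((len(sentences), len(vocab)))
    for r, s in enumerate(sentences):
        for w in s.lower().split():
            X[r, index[w]] += 1
    Xc = X - X.mean(axis=0)              # center before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = np.abs(Xc @ Vt[0])          # |projection on 1st component|
    top = np.argsort(scores)[::-1][:k]
    return [sentences[i] for i in sorted(top)]

docs = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "stocks fell sharply today",
]
print(pca_summarize(docs, k=1))
```

A serious version would use TF-IDF weighting and several components, but the scoring principle is the same.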
App for updating customers with the latest IPOs and their prospectuses.
App for local retail: buying things based on proximity for faster delivery.
Train a classifier to identify a student's stage in the Flow state graph in Udacity.
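A stand-in sketch for that classifier, using the standard challenge-vs-skill reading of the Flow graph. The threshold rule below is a placeholder for a trained model, and the features (challenge and skill ratings on a 0-10 scale) are assumptions; in practice the labels would come from student feedback and the features from quiz difficulty, time on task, hint usage, and so on.

```python
def flow_stage(challenge, skill, margin=1.0):
    """Label a student's state from challenge vs. skill ratings (0-10)."""
    if challenge - skill > margin:
        return "anxiety"   # task too hard for current skill
    if skill - challenge > margin:
        return "boredom"   # task too easy
    return "flow"          # challenge roughly matches skill

print(flow_stage(8, 3))  # anxiety
print(flow_stage(2, 9))  # boredom
print(flow_stage(6, 6))  # flow
```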
Model a situation as a recurrent neural network fed with a sequence, where each image in the sequence is encoded by a convolutional neural network.
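A toy NumPy sketch of that architecture: a single convolution kernel with pooling stands in for the CNN encoder, and a vanilla RNN folds the per-frame features into one state vector. All weights are random and untrained, and every size here is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(img, kernel):
    """Tiny CNN stand-in: one valid 2-D convolution + pooled statistics."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return np.array([out.mean(), out.max(), out.min()])  # 3-dim feature

def rnn(features_seq, Wx, Wh):
    """Vanilla RNN: fold the per-frame features into one hidden state."""
    h = np.zeros(Wh.shape[0])
    for x in features_seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

kernel = rng.normal(size=(3, 3))
Wx = rng.normal(size=(5, 3))   # hidden size 5, feature size 3
Wh = rng.normal(size=(5, 5))

frames = [rng.normal(size=(8, 8)) for _ in range(4)]   # a 4-frame "situation"
state = rnn([conv_features(f, kernel) for f in frames], Wx, Wh)
print(state.shape)  # (5,)
```

The final hidden state is the model's running summary of the situation, which connects directly to the summarization note below.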
Is text summarization mandatory for NLP? When you read a book, you constantly summarize, build a model in your brain, and continue reading with that summary in context. So summarizing text and feeding the summary back into an LSTM network could make NLU more efficient.