Dependency injection

[Image: dependency injection1.PNG]

What is dependency injection?

Instead of instantiating the object we depend on ourselves, we take the help of a dependency injection framework, which pushes a ready-made object into the dependent class at runtime.

In the above picture we are not instantiating hotDrink with new hotDrink(); instead, we receive it as a constructor parameter.
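As a minimal sketch of the same idea in plain Java (the HotDrink, Tea and DrinkMachine names are illustrative, inferred from the hotDrink example above):

```java
// Illustrative types; only the wiring pattern matters here.
interface HotDrink {
    void brew();
}

class Tea implements HotDrink {
    public void brew() { System.out.println("Brewing tea"); }
}

// The dependent class never calls `new` on its dependency itself; whoever
// assembles the object graph passes a ready-made HotDrink in.
class DrinkMachine {
    private final HotDrink hotDrink;

    DrinkMachine(HotDrink hotDrink) {   // constructor injection
        this.hotDrink = hotDrink;
    }

    void serve() { hotDrink.brew(); }
}
```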

Why is it useful?

This is useful for decoupling two different code packages and removing the direct dependency between them.

If you have an interface with multiple implementations, a dependency injection framework lets you choose which implementation to run at runtime. Suppose you want to isolate and test a particular package: you can provide mock implementations of all the other interfaces it depends on.

If you had to do the same without DI and interfaces, you would need to change code in a lot of places, substituting the mock object everywhere the original object was being created.
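For example (a sketch reusing the hypothetical HotDrink and DrinkMachine types from above), a test can hand the class a mock implementation without touching production code:

```java
// Hypothetical mock used only in tests.
class MockHotDrink implements HotDrink {
    boolean brewed = false;
    public void brew() { brewed = true; }
}

class DrinkMachineTest {
    void servesTheInjectedDrink() {
        MockHotDrink mock = new MockHotDrink();
        DrinkMachine machine = new DrinkMachine(mock);  // inject the mock
        machine.serve();
        assert mock.brewed;  // verify behaviour without any real HotDrink
    }
}
```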

It is especially useful when dependencies themselves depend on other dependencies, and so on.

Implementing with Google Guice:

[Images: Guice di, Guice di2]

You should usually initialize the injector where you bootstrap the program.

bind() inside a Guice module's configure() method lets you define which implementation of an interface to use.
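A minimal Guice sketch, reusing the hypothetical HotDrink and Tea types from above: the module binds the interface to one implementation, and the injector is created where the program bootstraps:

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

public class DrinkApp {

    // The module declares which implementation backs the HotDrink interface.
    static class DrinkModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(HotDrink.class).to(Tea.class);  // pick the implementation here
        }
    }

    static class GuiceDrinkMachine {
        private final HotDrink hotDrink;

        @Inject
        GuiceDrinkMachine(HotDrink hotDrink) {   // Guice supplies this parameter
            this.hotDrink = hotDrink;
        }

        void serve() { hotDrink.brew(); }
    }

    public static void main(String[] args) {
        // Bootstrap: create the injector once, at program start.
        Injector injector = Guice.createInjector(new DrinkModule());
        injector.getInstance(GuiceDrinkMachine.class).serve();
    }
}
```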

The @ImplementedBy annotation can be used instead of configure() and bind().

@ImplementedBy acts as a default binding.

If a Guice module binding (bind) and @ImplementedBy point to different implementations of the same interface, the module binding wins.
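A sketch of that default, restating the hypothetical HotDrink interface with the annotation:

```java
import com.google.inject.ImplementedBy;

// Default binding: used whenever no module explicitly binds HotDrink.
@ImplementedBy(Tea.class)
interface HotDrink {
    void brew();
}

// If some module also declares bind(HotDrink.class).to(Coffee.class),
// that explicit module binding overrides the @ImplementedBy default.
```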


 

Want to have conditional logic to pick implementations?

Providers
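A sketch with a Guice Provider, keeping the hypothetical drink types (Coffee is another assumed implementation): the provider runs arbitrary logic each time an instance is needed, so it can choose the implementation conditionally:

```java
import com.google.inject.AbstractModule;
import com.google.inject.Provider;

class Coffee implements HotDrink {
    public void brew() { System.out.println("Brewing coffee"); }
}

// The provider decides at request time which implementation to return.
class HotDrinkProvider implements Provider<HotDrink> {
    @Override
    public HotDrink get() {
        boolean morning = java.time.LocalTime.now().getHour() < 12;  // any condition works
        return morning ? new Coffee() : new Tea();
    }
}

class ConditionalDrinkModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(HotDrink.class).toProvider(HotDrinkProvider.class);
    }
}
```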


Dependency injection without an interface:
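Guice can also inject concrete classes directly: if the class has an @Inject constructor (or a usable no-argument constructor), no interface and no explicit bind() are needed. A minimal sketch with hypothetical Kettle and TeaMaker classes:

```java
import com.google.inject.Guice;
import com.google.inject.Inject;

// No interface and no module: Guice constructs the concrete classes
// directly through their injectable constructors.
class Kettle {
    void boil() { System.out.println("Boiling water"); }
}

class TeaMaker {
    private final Kettle kettle;

    @Inject
    TeaMaker(Kettle kettle) {   // concrete dependency, injected as-is
        this.kettle = kettle;
    }

    void make() { kettle.boil(); }
}

class NoInterfaceDemo {
    public static void main(String[] args) {
        Guice.createInjector().getInstance(TeaMaker.class).make();
    }
}
```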

 


Design and Analysis of Algorithms

The most important principle for an algorithm designer is to never be content.

Asymptotic analysis:

O – asymptotic upper bound.

Omega – asymptotic lower bound.

Theta – tight bound (both an upper and a lower bound).
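For reference, the standard formal definitions:

$$T(n) = O(f(n)) \iff \exists\, c, n_0 > 0 \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0$$

$$T(n) = \Omega(f(n)) \iff \exists\, c, n_0 > 0 \text{ such that } T(n) \ge c \cdot f(n) \text{ for all } n \ge n_0$$

$$T(n) = \Theta(f(n)) \iff T(n) = O(f(n)) \text{ and } T(n) = \Omega(f(n))$$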

The master method for analyzing the running time of divide and conquer algorithms:

  1. A black-box solution for the running time: you only supply a few parameters describing the recurrence.
  2. Assumption: all subproblems have equal size.
  3. The recurrence has the form T(n) <= a*T(n/b) + O(n^d).

n – original problem size.

a – number of recursive calls per instance; the rate at which subproblems proliferate (evil).

b – the factor by which the input size shrinks for each recursive call.

d – exponent of the polynomial work done outside the recursive calls, needed to merge subproblem solutions.

b^d – the rate of work shrinkage per subproblem (good).

Case 1: a = b^d – the same amount of work at each level; running time O(n^d log n).

Case 2: a < b^d – more work at the root level; running time O(n^d).

Case 3: a > b^d – more work at the leaf level; running time O(n^(log_b a)).
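A quick worked example (merge sort, used here only as an illustration): it makes a = 2 recursive calls on inputs of half the size (b = 2) and does linear merge work (d = 1). Since a = 2 = b^d, case 1 applies and the running time is O(n^d log n) = O(n log n).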

Algorithm Design paradigms:

  1. Divide and conquer:
    1. Write the base case.
    2. Use recursion to solve subproblems of the problem.
    3. Combine the subproblem solutions.
    4. Store the results of subproblems in a hash map and use them to trim other repeating recursive paths (dynamic programming; see the sketch after this list).
  2. Randomization: like the random pivot choice in quicksort.
    1. Decomposition principle: express a complicated random variable as a sum of indicator variables and apply linearity of expectation.
  3. Greedy algorithms.
  4. Dynamic programming.
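A small sketch of point 1.4 (memoization), using Fibonacci purely as a hypothetical example:

```java
import java.util.HashMap;
import java.util.Map;

// Cache subproblem results so each one is solved only once; repeated
// recursive paths are trimmed by the hash-map lookup.
public class MemoFib {
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        if (n <= 1) return n;                   // base cases
        Long cached = memo.get(n);
        if (cached != null) return cached;      // subproblem already solved
        long result = fib(n - 1) + fib(n - 2);  // solve smaller subproblems
        memo.put(n, result);                    // store for later reuse
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50));            // fast: each subproblem solved once
    }
}
```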

Problems:

Graphs:

n – number of vertices, m – number of edges.

Graph partitioning: Cuts of a graph, Minimum cut

Representation:

  1. Adjacency matrix: O(n^2) space.
  2. Adjacency lists: O(n + m) space (see the sketch below).
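A minimal adjacency-list sketch (the vertex numbering and edges are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Adjacency-list representation of a directed graph on n vertices:
// one list per vertex plus one entry per edge, O(n + m) space in total.
public class AdjacencyListDemo {
    public static void main(String[] args) {
        int n = 4;
        List<List<Integer>> graph = new ArrayList<>();
        for (int v = 0; v < n; v++) graph.add(new ArrayList<>());
        graph.get(0).add(1);  // arc 0 -> 1
        graph.get(0).add(2);  // arc 0 -> 2
        graph.get(2).add(3);  // arc 2 -> 3
        System.out.println(graph);  // [[1, 2], [], [3], []]
    }
}
```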

Strongly connected components: maximal regions of a directed graph in which you can get from any node A to any other node B of the region (and back).

Kosaraju’s 2-pass strongly connected components algorithm:

One of the smartest and most beautiful algorithms.
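A sketch of the two passes in Java (iterative DFS to avoid stack overflow on large graphs; vertex labels 0..n-1 and the two adjacency lists are assumed inputs): the first pass runs DFS on the reversed graph to compute a finishing order, and the second pass runs DFS on the original graph in decreasing finishing order, where each DFS tree found is one SCC.

```java
import java.util.*;

public class KosarajuSketch {

    // graph.get(v) = outgoing neighbours of v; rev.get(v) = neighbours in the reversed graph.
    // Returns comp[v] = id of the strongly connected component containing v.
    public static int[] scc(List<List<Integer>> graph, List<List<Integer>> rev) {
        int n = graph.size();
        boolean[] seen = new boolean[n];
        Deque<Integer> order = new ArrayDeque<>();      // most recently finished vertex on top

        // Pass 1: DFS on the reversed graph, recording finishing order.
        for (int s = 0; s < n; s++) {
            if (!seen[s]) dfsFinishOrder(rev, s, seen, order);
        }

        // Pass 2: DFS on the original graph in decreasing finishing time;
        // each DFS tree discovered is exactly one SCC.
        int[] comp = new int[n];
        Arrays.fill(comp, -1);
        int label = 0;
        while (!order.isEmpty()) {
            int s = order.pop();
            if (comp[s] != -1) continue;                // already placed in a component
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(s);
            comp[s] = label;
            while (!stack.isEmpty()) {
                int v = stack.pop();
                for (int w : graph.get(v)) {
                    if (comp[w] == -1) { comp[w] = label; stack.push(w); }
                }
            }
            label++;
        }
        return comp;
    }

    // Iterative DFS that pushes each vertex onto 'order' when it finishes.
    private static void dfsFinishOrder(List<List<Integer>> g, int s,
                                       boolean[] seen, Deque<Integer> order) {
        Deque<int[]> stack = new ArrayDeque<>();        // frames of {vertex, next edge index}
        stack.push(new int[]{s, 0});
        seen[s] = true;
        while (!stack.isEmpty()) {
            int[] frame = stack.peek();
            int v = frame[0];
            if (frame[1] < g.get(v).size()) {
                int w = g.get(v).get(frame[1]++);       // advance to the next outgoing edge
                if (!seen[w]) { seen[w] = true; stack.push(new int[]{w, 0}); }
            } else {
                stack.pop();
                order.push(v);                          // all edges explored: v is finished
            }
        }
    }
}
```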

The structure of the internet:

[Image: Structure of internet.PNG]

Further reading: Networks, Crowds, and Markets (Easley and Kleinberg).

 

Glossary:

arcs – directed edges; each arc is an ordered pair of vertices.