MongoDB : Replica-Set Vs Sharding.

April 10, 2018

By sharding, you split your collection into several parts.
Replicating your database means you make mirrors of your data-set.

Replica-Set means that you have multiple instances of MongoDB, each mirroring all the data of the others. A replica-set consists of one Master (also called “Primary”) and one or more Slaves (aka Secondaries). Read-operations can be served by any slave, so you can increase read-performance by adding more slaves to the replica-set (provided that your client application is actually configured to read from different set members). But write-operations always take place on the master of the replica-set and are then propagated to the slaves, so writes won’t get faster when you add more slaves.

Replica-sets also offer fault-tolerance. When one of the members of the replica-set goes down, the others take over. When the master goes down, the slaves elect a new master. For that reason it is suggested that production deployments always run MongoDB as a replica-set of at least three servers, two of them holding data (the third is a data-less “arbiter” whose job is to vote in elections, so that a new master can be determined when the current master goes down).

 

Sharded Cluster means that each shard of the cluster (which can itself be a replica-set) takes care of a part of the data. Each request, both reads and writes, is served by the shard where the data resides. This means that both read- and write-performance can be increased by adding more shards to the cluster. Which document resides on which shard is determined by the shard key of each collection. It should be chosen in a way that the data can be evenly distributed across all shards, and so that for the most common queries it is clear which shard holds the data (example: when you frequently query by user_name, your shard key should include the field user_name, so each such query can be delegated to only the one shard that holds the document).
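As a rough illustration, enabling sharding on such a collection from Java might look like the sketch below (assuming the MongoDB 3.x Java driver and a connection to a mongos router; the mydb.users namespace and host name are hypothetical):

import com.mongodb.MongoClient;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class ShardSetup {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("mongos-host", 27017); // connect via mongos
        MongoDatabase admin = client.getDatabase("admin");
        // Enable sharding for the database, then shard the collection on user_name
        admin.runCommand(new Document("enableSharding", "mydb"));
        admin.runCommand(new Document("shardCollection", "mydb.users")
                .append("key", new Document("user_name", 1)));
        client.close();
    }
}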

The drawback is that the fault-tolerance suffers. When one shard of the cluster goes down, any data on it is inaccessible. For that reason each member of the cluster should also be a replica-set.

Categories: Android, Databases, MongoDB

Java : ReentrantLock and synchronized

January 25, 2018

A ReentrantLock is owned by the thread that last successfully locked it and has not yet unlocked it. A thread invoking lock() will return, successfully acquiring the lock, when the lock is not owned by another thread. The method returns immediately if the current thread already owns the lock.

 

The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread. Otherwise this lock does not guarantee any particular access order.

 

Key features of ReentrantLock (illustrated in the sketch below):

 

Ability to lock interruptibly.

Ability to time out while waiting for a lock.

Power to create a fair lock.

API to get the list of threads waiting for the lock.

Flexibility to try for the lock without blocking.

You can use ReentrantReadWriteLock.ReadLock and ReentrantReadWriteLock.WriteLock for more granular control over locking of read and write operations.
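A minimal sketch of a few of these features in one place (timed tryLock, interruptible locking, fairness); the class and method names here are made up for illustration:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock

    public boolean updateIfFree() throws InterruptedException {
        // Try for the lock, but give up after 500 ms instead of blocking forever
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock(); // always release in finally
            }
        }
        return false; // timed out waiting for the lock
    }

    public void updateInterruptibly() throws InterruptedException {
        lock.lockInterruptibly(); // another thread may interrupt us while we wait
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}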

 

 

Extended capabilities of reentrant lock include :-

The ability to have more than one condition variable per monitor. Monitors that use the synchronized keyword can only have one, which means reentrant locks support more than one wait()/notify() queue (see the bounded-buffer sketch after this list).

The ability to make the lock “fair”. “[fair] locks favor granting access to the longest-waiting thread. Otherwise this lock does not guarantee any particular access order.” Synchronized blocks are unfair.

The ability to check if the lock is being held.

The ability to get the list of threads waiting on the lock.
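As a sketch of the first point, here is the classic bounded buffer (adapted from the java.util.concurrent Javadoc): producers and consumers wait on two separate Condition queues of the same lock, so a put only wakes consumers and a take only wakes producers:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    // Two wait queues on one lock: impossible with a synchronized monitor
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items = new Object[16];
    private int count, putIndex, takeIndex;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await();
            items[putIndex] = x;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal(); // wake a waiting consumer only
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await();
            Object x = items[takeIndex];
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal(); // wake a waiting producer only
            return x;
        } finally {
            lock.unlock();
        }
    }
}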

Disadvantages of reentrant locks are :-

Need to add an import statement. Need to wrap lock acquisitions in a try/finally block, which makes it uglier than the synchronized keyword. The synchronized keyword can be put on method definitions, which avoids the need for a block and reduces nesting.

 

When to use :-

ReentrantLock might be more apt if you need hand-over-hand locking: a thread traversing a linked list locks the next node and only then unlocks the current one.

The synchronized keyword is apt in situations where JVM optimizations such as lock coarsening and biased locking apply; those optimizations aren’t currently implemented for ReentrantLock.

Categories: Advanced, JAVA

Cloning : Deep Copy and Shallow Copy in Java

January 25, 2018

What is Cloning

Object cloning is the process of making a copy of an object. To make a copy of an object we use the protected Object clone() throws CloneNotSupportedException method of the Object class. There are two ways of creating a copy of an object:
1. Shallow copy
2. Deep copy

What is Shallow Copy

In a shallow copy, the original and the cloned object share references to the same nested objects. If you modify a shared nested object (here, the Address) through either the original or the clone, the change is visible in both; reassigning a field, such as setting a new name, only affects the object it is called on.

Now, to get deeper into shallow copy: what happens in a shallow copy so that changes made through one object are visible in the other?

class Address {
    int houseNo;
    String street;

    public Address(int houseNo, String street) {
        this.houseNo = houseNo;
        this.street = street;
    }

    public int getHouseNo() { return houseNo; }
    public void setHouseNo(int houseNo) { this.houseNo = houseNo; }
    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }
}

class User implements Cloneable {
    String name;
    Address address;

    public User(String name, int houseNo, String street) {
        this.name = name;
        this.address = new Address(houseNo, street);
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        // super.clone() copies the fields only; the Address reference is shared
        return super.clone();
    }
}

public class Employee {
    public static void main(String[] args) throws CloneNotSupportedException {
        User originalUserObject = new User("Reddy", 748, "USA");
        User cloneUserObject = (User) originalUserObject.clone();

        System.out.println("Original Object before Modification \n Name: " + originalUserObject.name
                + "\n House Number :" + originalUserObject.getAddress().houseNo
                + "\n Street :" + originalUserObject.getAddress().street);
        System.out.println("Clone Object before Modification \n Name: " + cloneUserObject.name
                + "\n House Number :" + cloneUserObject.getAddress().houseNo
                + "\n Street :" + cloneUserObject.getAddress().street);

        originalUserObject.setName("Reddy K");
        // These changes go through the shared Address and so show up in the clone too
        originalUserObject.getAddress().setHouseNo(51);
        originalUserObject.getAddress().setStreet("INDIA");

        System.out.println("\n\n");
        System.out.println("Original Object after Modification \n Name: " + originalUserObject.name
                + "\n House Number :" + originalUserObject.getAddress().houseNo
                + "\n Street :" + originalUserObject.getAddress().street);
        System.out.println("Clone Object after Modification \n Name: " + cloneUserObject.name
                + "\n House Number :" + cloneUserObject.getAddress().houseNo
                + "\n Street :" + cloneUserObject.getAddress().street);
    }
}

What is Deep Copy

In a deep copy, changes made to the original object after the copy is created are not reflected in the copy, because the original object and the cloned object don’t share any references:

In the User class above, change the overridden clone() method as below:

@Override
protected Object clone() throws CloneNotSupportedException {
    // Create a brand-new User (and therefore a brand-new Address),
    // so no mutable state is shared with the original
    return new User(name, address.getHouseNo(), address.getStreet());
}
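An alternative sketch (not from the original post): keep the super.clone() call, which also copies any other fields and preserves the runtime class for subclasses, and then replace the shared mutable Address:

@Override
protected Object clone() throws CloneNotSupportedException {
    User copy = (User) super.clone();  // shallow field-by-field copy first
    // then give the copy its own Address so nothing mutable is shared
    copy.address = new Address(address.getHouseNo(), address.getStreet());
    return copy;
}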

Categories: Android

OSGi’s Benefits

January 22, 2018

Reduced Complexity – Developing with OSGi technology means developing bundles: the OSGi components. Bundles are modules. They hide their internals from other bundles and communicate through well defined services. Hiding internals means more freedom to change later. This not only reduces the number of bugs, it also makes bundles simpler to develop because correctly sized bundles implement a piece of functionality through well defined interfaces. There is an interesting blog that describes what OSGi technology did for their development process.

Reuse – The OSGi component model makes it very easy to use many third party components in an application. An increasing number of open source projects provide their JARs ready made for OSGi. However, commercial libraries are also becoming available as ready made bundles.

Real World – The OSGi framework is dynamic. It can update bundles on the fly and services can come and go. Developers used to more traditional Java see this as a very problematic feature and fail to see the advantage. However, it turns out that the real world is highly dynamic, and having dynamic services that can come and go makes the services a perfect match for many real world scenarios. For example, a service could model a device in the network. If the device is detected, the service is registered. If the device goes away, the service is unregistered. There are a surprising number of real world scenarios that match this dynamic service model. Applications can therefore reuse the powerful primitives of the service registry (register, get, list with an expressive filter language, and waiting for services to appear and disappear) in their own domain. This not only saves writing code, it also provides global visibility, debugging tools, and more functionality than would have been implemented for a dedicated solution. Writing code in such a dynamic environment sounds like a nightmare, but fortunately, there are support classes and frameworks that take most, if not all, of the pain out of it.

Easy Deployment – The OSGi technology is not just a standard for components. It also specifies how components are installed and managed. This API has been used by many bundles to provide a management agent. This management agent can be as simple as a command shell, a TR-69 management protocol driver, OMA DM protocol driver, a cloud computing interface for Amazon’s EC2, or an IBM Tivoli management system. The standardized management API makes it very easy to integrate OSGi technology in existing and future systems.

Dynamic Updates – The OSGi component model is a dynamic model. Bundles can be installed, started, stopped, updated, and uninstalled without bringing down the whole system. Many Java developers do not believe this can be done reliably and therefore initially do not use this in production. However, after using this in development for some time, most start to realize that it actually works and significantly reduces deployment times.

Adaptive – The OSGi component model is designed from the ground up to allow the mixing and matching of components. This requires that the dependencies of components need to be specified and it requires components to live in an environment where their optional dependencies are not always available. The OSGi service registry is a dynamic registry where bundles can register, get, and listen to services. This dynamic service model allows bundles to find out what capabilities are available on the system and adapt the functionality they can provide. This makes code more flexible and resilient to changes.

Transparency – Bundles and services are first class citizens in the OSGi environment. The management API provides access to the internal state of a bundle as well as how it is connected to other bundles. For example, most frameworks provide a command shell that shows this internal state. Parts of the applications can be stopped to debug a certain problem, or diagnostic bundles can be brought in. Instead of staring at millions of lines of logging output and long reboot times, OSGi applications can often be debugged with a live command shell.

Versioning – OSGi technology solves JAR hell. JAR hell is the problem that library A works with library B;version=2, but library C can only work with B;version=3. In standard Java, you’re out of luck. In the OSGi environment, all bundles are carefully versioned and only bundles that can collaborate are wired together in the same class space. This allows both bundle A and C to function with their own library. Though it is not advised to design systems with this versioning issue, it can be a life saver in some cases.

Simple – The OSGi API is surprisingly simple. The core API is only one package and less than 30 classes/interfaces. This core API is sufficient to write bundles, install them, start, stop, update, and uninstall them and includes all listener and security classes. There are very few APIs that provide so much functionality for so little API.
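As a small illustration of how little API a bundle needs, here is a minimal sketch (not from the original article) of a bundle activator that publishes a service on start and withdraws it on stop; even a plain String can be the service:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class HelloActivator implements BundleActivator {
    private ServiceRegistration registration;

    public void start(BundleContext context) {
        // Publish a String object under the String.class name
        registration = context.registerService(String.class.getName(), "hello from a bundle", null);
    }

    public void stop(BundleContext context) {
        registration.unregister(); // the service disappears from the registry
    }
}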

Small – The OSGi Release 4 Framework can be implemented in about a 300KB JAR file. This is a small overhead for the amount of functionality that is added to an application by including OSGi. OSGi therefore runs on a large range of devices: from very small, to small, to mainframes. It only asks for a minimal Java VM to run and adds very little on top of it.

Fast – One of the primary responsibilities of the OSGi framework is loading the classes from bundles. In traditional Java, the JARs are completely visible and placed on a linear list. Searching for a class requires searching through this (often very long; 150 JARs is not uncommon) list. In contrast, OSGi pre-wires bundles and knows for each bundle exactly which bundle provides the class. This lack of searching is a significant speed-up factor at startup.

Lazy – Lazy in software is good, and the OSGi technology has many mechanisms in place to do things only when they are really needed. For example, bundles can be started eagerly, but they can also be configured to only start when other bundles are using them. Services can be registered, but only created when they are used. The specifications have been optimized several times to allow for these kinds of lazy scenarios that can save tremendous runtime costs.

Secure – Java has a very powerful fine grained security model at the bottom but it has turned out very hard to configure in practice. The result is that most secure Java applications are running with a binary choice: no security or very limited capabilities. The OSGi security model leverages the fine grained security model but improves the usability (as well as hardening the original model) by having the bundle developer specify the requested security details in an easily audited form while the operator of the environment remains fully in charge. Overall, OSGi likely provides one of the most secure application environments that is still usable short of hardware protected computing platforms.

Non Intrusive – Applications (bundles) in an OSGi environment are left to their own devices. They can use virtually any facility of the VM without OSGi restricting them. Best practice in OSGi is to write Plain Old Java Objects, and for this reason there is no special interface required for OSGi services; even a Java String object can act as an OSGi service. This strategy makes application code easier to port to another environment.

Runs Everywhere – Well, that depends. The original goal of Java was to run anywhere. Obviously, it is not possible to run all code everywhere because the capabilities of the Java VMs differ. A VM in a mobile phone will likely not support the same libraries as an IBM mainframe running a banking application. There are two issues to take care of. First, the OSGi APIs should not use classes that are not available in all environments. Second, a bundle should not start if it contains code that is not available in the execution environment. Both of these issues have been taken care of in the OSGi specifications.

Source : www.osgi.org/Technology/WhyOSGi

 

Example: I have been using it in Adobe Experience Manager (AEM).

Categories: Advanced

Serverless Architecture

January 19, 2018

Serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service or “BaaS”) or on custom code that’s run in ephemeral containers (Function as a Service or “FaaS”), the best-known vendor host of which is currently AWS Lambda.

Despite the name, it does not actually involve running code without servers. The name “serverless computing” is used because the business or person that owns the system does not have to purchase, rent or provision servers or virtual machines for the back-end code to run on.

Serverless code can be used in conjunction with code written in traditional server style, such as microservices. For example, part of a web application could be written as microservices and another part could be written as serverless code. Alternatively, an application could be written that uses no provisioned servers at all, being completely serverless.

FaaS provides a platform that allows developers to execute code in response to events without the complexity of building and maintaining the infrastructure; third-party services manage the server-side logic and state.
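For example, a FaaS function on AWS Lambda can be as small as a single handler class; a minimal sketch, assuming the aws-lambda-java-core library:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Lambda instantiates this class and calls handleRequest once per event;
// the owner of the code never provisions or manages a server.
public class HelloHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        return "Hello, " + input;
    }
}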

Drawbacks of Serverless computing :

1. Dependence on third-party API systems

2. Lack of operational tools

3. Architectural complexity

4. Implementation drawbacks

Categories: Advanced

Consistent Hashing : Java Example

January 4, 2018

Why Hash?

A map stores each value at a memory address computed from its key. Figuring out the address is called hashing, and maps that work like this under the hood are called hash tables. Memory locations are called buckets.

Problem 1: Finite Memory

Hashing takes a key and generates a potentially unbounded number (a hash) which is supposed to represent a memory address. Real computers, though, only provide a finite amount of memory to programs.

A direct one-to-one mapping between hashes and the memory in your computer is therefore impossible in most circumstances.

Problem 2: Disappearing Buckets

An ordinary hash table relies on the presence of a fixed, constant, never changing number of locations. There are times when this is not the case.

Better Solution

The problem of mimicking a hash table when the number of locations is constantly changing is exactly why consistent hashing was invented.

Consistent hashing, in a nutshell, does this:

  • Stop trying to keep one value at exactly one location. Let one location house multiple values from multiple keys.
  • Don’t number your locations consecutively. Give them effectively random numbers between 0 and infinity.
  • Don’t compute hash % number of locations. Instead, find the smallest location number greater than your key’s hash, and put the value there.
  • If your hash is greater than all location numbers, wrap around and put it in the lowest-numbered location.

The basic idea behind the consistent hashing algorithm is to hash both objects and caches using the same hash function. The reason to do this is to map the cache to an interval, which will contain a number of object hashes. If the cache is removed then its interval is taken over by a cache with an adjacent interval. All the other caches remain unchanged.

package com.raj.web;

import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

import com.google.common.hash.Hashing;

public class ConsistentHash<T> {

    private final int numberOfVirtualNodeReplicas;
    private final SortedMap<Long, T> circle = new TreeMap<Long, T>();
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Lock r = rwl.readLock();
    private final Lock w = rwl.writeLock();

    public ConsistentHash(int numberOfVirtualNodeReplicas, List<T> nodes) {
        this.numberOfVirtualNodeReplicas = numberOfVirtualNodeReplicas;
        for (T node : nodes) {
            add(node);
        }
    }

    public void add(T node) {
        w.lock();
        try {
            addNode(node);
        } finally {
            w.unlock();
        }
    }

    public void remove(T node) {
        w.lock();
        try {
            removeNode(node);
        } finally {
            w.unlock();
        }
    }

    // Each MD5 digest yields four 4-byte hash keys, so every loop iteration
    // places four virtual nodes on the circle.
    private void addNode(T node) {
        for (int i = 0; i < numberOfVirtualNodeReplicas / 4; i++) {
            byte[] digest = md5(node + "-" + i);
            for (int h = 0; h < 4; h++) {
                circle.put(gethashKey(digest, h), node);
            }
        }
    }

    private void removeNode(T node) {
        for (int i = 0; i < numberOfVirtualNodeReplicas / 4; i++) {
            byte[] digest = md5(node + "-" + i);
            for (int h = 0; h < 4; h++) {
                circle.remove(gethashKey(digest, h));
            }
        }
    }

    // Find the node responsible for a key: the first virtual node clockwise
    // from the key's hash, wrapping around to the start of the circle.
    public T get(String key) {
        r.lock();
        try {
            if (circle.isEmpty()) {
                return null;
            }
            long hash = getKetamaKey(key);
            SortedMap<Long, T> tail = circle.tailMap(hash);
            return tail.isEmpty() ? circle.get(circle.firstKey()) : tail.get(tail.firstKey());
        } finally {
            r.unlock();
        }
    }

    public static byte[] md5(String text) {
        return Hashing.md5().hashBytes(text.getBytes()).asBytes();
    }

    public static long getKetamaKey(final String k) {
        byte[] digest = md5(k);
        return gethashKey(digest, 0) & 0xffffffffL;
    }

    public static Long gethashKey(byte[] digest, int h) {
        return ((long) (digest[3 + h * 4] & 0xFF) << 24)
                | ((long) (digest[2 + h * 4] & 0xFF) << 16)
                | ((long) (digest[1 + h * 4] & 0xFF) << 8)
                | (digest[h * 4] & 0xFF);
    }
}
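A quick usage sketch (the node names are made up; 160 virtual nodes per physical node is a common Ketama-style default):

import java.util.Arrays;
import java.util.List;

public class ConsistentHashDemo {
    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("cache-a", "cache-b", "cache-c");
        ConsistentHash<String> ring = new ConsistentHash<String>(160, nodes);

        System.out.println(ring.get("user:42")); // the node responsible for this key

        // Removing a node only remaps the keys that lived on it;
        // keys on cache-a and cache-c stay where they are
        ring.remove("cache-b");
        System.out.println(ring.get("user:42"));
    }
}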

 

HashFunction Doc:

https://google.github.io/guava/releases/19.0/api/docs/com/google/common/hash/HashFunction.html

Categories: JAVA

Styles: CSS/SCSS/SASS Example

December 20, 2017

Sass (Syntactically Awesome StyleSheets) has two syntaxes:

 1) SCSS (Sassy CSS)
 2) an older, original one: the indented syntax, original Sass.

So they are both part of the Sass pre-processor, with two different possible syntaxes.

The most important differences between SCSS and original Sass:

SCSS:

  • Its syntax is similar to CSS.
  • Uses braces {}.
  • Uses semi-colons ;.
  • The variable sign in SCSS is $.
  • The assignment sign in SCSS is :.
  • Files using this syntax have the .scss extension.

Original Sass:

  • Its syntax is similar to Ruby.
  • No braces.
  • Strict indentation (whitespace defines the blocks).
  • No semi-colons.
  • In the original Sass syntax the variable sign was ! instead of $.
  • In the original Sass syntax the assignment sign was = instead of :.
  • Files using this syntax have the .sass extension.

Example :

Sass: +mixinname()
SCSS: @include mixinname()
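To make the contrast concrete, here is the same trivial mixin defined and used in both syntaxes (a sketch; the mixin and selector names are made up):

// SCSS (.scss): braces, semi-colons, @mixin/@include
@mixin large-text {
  font-size: 20px;
}
.header { @include large-text; }

// Sass (.sass): indentation only; = defines a mixin, + includes it
=large-text
  font-size: 20px
.header
  +large-text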

Let’s do a small example of how this all works end to end:

In the HTML file, include the .css file generated from the .scss file with the help of a third-party tool like Koala or Gulp.

Gulp and Sass :

Step 1:

Install node.js ->Node.js

Step-2 : 

If you have not installed gulp before, make sure you install it globally first. Start by running npm install -g gulp. Once that has completed, we need to add it to our project: type npm install --save-dev gulp. This will install gulp, and --save-dev will add gulp as a devDependency in your package.json file. We also need to do the same for gulp-sass. In total, these are the commands we need to run:

Go to the command prompt and run the commands below:

npm init //This will run through creating the package.json file
npm install -g gulp //If you haven't installed gulp globally before
npm install --save-dev gulp
npm install --save-dev gulp-sass

Step 3: Sample project setup:

- index.html
- sass/
  - styles.scss
- css/
- package.json
- Gulpfile.js

Step 4: Gulpfile setup :

Let’s get into actually writing the JS. Inside Gulpfile.js we need to do a few things: first get access to gulp, and then set up the task that will actually run and compile the Sass to CSS.

First we add gulp.task(); this is how we define tasks in gulp. The method takes two arguments: the name of the task, and a callback function that runs the actual task. We use the .pipe() method to pass along anything from .src(), and inside the pipe we use the imported sass module to compile our Sass.

var gulp = require('gulp');
var sass = require('gulp-sass');

gulp.task('styles', function() {
    // Returning the stream lets gulp know when the task has finished
    return gulp.src('sass/**/*.scss')
        .pipe(sass().on('error', sass.logError))
        .pipe(gulp.dest('./css/'));
});

Step 5: HTML file:

Inside index.html, add markup along the lines below. (The original snippet was stripped by the blog; this is a minimal reconstruction that links the stylesheet generated into the css folder and uses the .wrapper/.bg-cover classes defined in Step 6.)

<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="css/styles.css">
  </head>
  <body>
    <div class="wrapper bg-cover1">hello</div>
  </body>
</html>

Step 6: SCSS file: Inside styles.scss:

@mixin cover {
  $color: red;
  @for $i from 1 through 5 {
    &.bg-cover#{$i} { background-color: adjust-hue($color, 15deg * $i); }
  }
}

.wrapper { @include cover; }

Step 7: Running the gulpfile

In order to run our gulp task we simply go to the terminal and type gulp styles, where 'styles' is the name of the task we created!

> gulp styles

Step 8: Testing:

All done! Open the HTML file and see the styles applied to the DOM elements.

You can check the generated CSS file (compiled from styles.scss) inside the css folder.

Categories: Android, CSS3, HTML5

Oracle Vs MongoDB

October 12, 2017

Difference between mongoDB and RDBMS databases (Oracle vs mongoDB):

Table: In Oracle, a table consists of rows and columns. The mongoDB equivalent of a table is a collection, where data is stored in fields made up of key-value pairs.

Row: In Oracle, a row represents a single, implicitly structured record with pre-defined column names. In mongoDB, records are stored as documents.

Column: In an RDBMS, a set of data values is called a column. The mongoDB equivalent of a column is a field.

Normalization: Normalization is best practice in an RDBMS, as it prevents data redundancy and maintains integrity. It is not required in mongoDB, which attains flexibility through its key-value pair structure.

Structure: Oracle uses a Table-Column-Row (TCR) structure. mongoDB’s equivalent is a classes-and-objects (CO) structure.

Joins: In Oracle, joins between multiple tables are inevitable to get a complete view of the data according to business requirements. In mongoDB, related data is stored in a single collection, separated by using embedded documents; the traditional join concept is not available.

Schema: Schemas are pre-defined in Oracle. mongoDB has a dynamic schema, which is best suited for unstructured data.

Primary Key: In Oracle, any column or set of columns can be defined as the primary key. mongoDB has a default _id field that serves as the primary key.

Scalability: The Oracle RDBMS is vertically scalable; mongoDB is scaled horizontally.

Application: Oracle is best suited for query-intensive environments where data must be structured before use. mongoDB is best suited for the unstructured data that today arrives from social media and many other sources.

 

Categories: Android, Databases, MongoDB, Oracle

JQuery Ajax Post: send input array to servlet

September 27, 2017

A sample example of sending an array as an input parameter via jQuery $.post:

Client side Code:

function sendjsonarraytoservlet() {
    var selected = [];
    var rows = $('#tt').datagrid('getSelections');
    for (var i = 0; i < rows.length; i++) {
        selected.push(rows[i].fieldname1);
        selected.push(rows[i].fieldname2);
        selected.push(rows[i].fieldname3);
        selected.push(rows[i].fieldname4);
        selected.push(rows[i].fieldname5);
    }

    $.post('/context/restmethod', { "arrayparams": JSON.stringify(selected) },
        function(returnedData) {
            console.log(returnedData);
        }).fail(function() {
            console.log("error");
        });
}


Server Side Code:

// JSONArray and JSONSerializer here come from the json-lib library (net.sf.json)
String inputarray = request.getParameter("arrayparams");
JSONArray subArray = (JSONArray) JSONSerializer.toJSON(inputarray);

for (int i = 0; i < subArray.size(); i++) {
    String item = subArray.getString(i);
    System.out.println("Output-->" + item);
}

Java : Singleton Design Pattern

September 16, 2017
Singleton design pattern example :
public class SingleObject {

    // Eagerly created when the class is loaded; class loading makes this thread-safe
    private static SingleObject instance = new SingleObject();

    // Private constructor prevents instantiation from outside
    private SingleObject() {}

    public static SingleObject getInstance() {
        return instance;
    }
}
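A common alternative (a sketch, not part of the original example) is the initialization-on-demand holder idiom, which stays lazy without any explicit synchronization:

public class LazySingleton {
    private LazySingleton() {}

    // Holder is not loaded until getInstance() is first called, and class
    // loading is thread-safe, so this is both lazy and safe
    private static class Holder {
        static final LazySingleton INSTANCE = new LazySingleton();
    }

    public static LazySingleton getInstance() {
        return Holder.INSTANCE;
    }
}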
Categories: Android