Wednesday, 30 September 2015

Wrap-around design pattern in Java 8

The wrap-around pattern is not listed in the GOF book, but it is very useful for problems like the ones below:

 - Loop constructs, e.g. do-while/while/for loops
 - A stopwatch around some code
 - Wrapping checked exceptions in runtime exceptions
 - Initialization and cleanup, e.g. thread pool creation/destruction or file open/close
 - Adding context info to threads, e.g. request context for logging or passing a security context

A lot of plumbing code is required in Java to do such simple things. Java 8 added lambda support, and that provides an answer to such problems.

Using lambdas, behavior can be passed as an argument to any function, which is a very powerful tool for solving the problems above.

Wrap Around
The template of a wrap-around function looks something like this:

Pre Code
Actual behavior
Post Code

WrapAround for loop


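A minimal sketch of such a loop construct, using BooleanSupplier for the condition and Runnable for the block (the Loops/loop names are illustrative, not the exact code from the post):

import java.util.function.BooleanSupplier;

public class Loops {

    // loop construct: keeps running the block while the condition holds
    public static void loop(BooleanSupplier condition, Runnable block) {
        while (condition.getAsBoolean()) {   // condition, supplied as a lambda
            block.run();                     // actual behavior, supplied as a lambda
        }
    }

    public static void main(String[] args) {
        int[] counter = {0};
        // prints 0..4; the single-element array is a mutable holder so the lambda can update it
        loop(() -> counter[0] < 5, () -> System.out.println("value " + counter[0]++));
    }
}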

The above code is very straightforward: it has two functional interfaces, one for the condition and another for the block of code to execute, and those two behaviors are passed to the loop function using lambdas.
This allows us to introduce a new loop construct.

Let's look at some more examples.

WrapAround for time/stopwatch


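A sketch of a stopwatch wrapper might look like this, with a Supplier carrying the actual behavior (the StopWatchWrap/time names are illustrative):

import java.util.function.Supplier;
import java.util.stream.IntStream;

public class StopWatchWrap {

    // pre code: start the timer; actual behavior: the supplied block; post code: report elapsed time
    public static <T> T time(String label, Supplier<T> block) {
        long start = System.nanoTime();
        try {
            return block.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        int sum = time("sum", () -> IntStream.range(0, 1_000_000).sum());
        System.out.println(sum);
    }
}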

WrapAround Closable/Exception

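A sketch of a wrapper that opens and closes the resource and turns the checked IOException into a RuntimeException could look like this; a small custom functional interface lets the lambda throw the checked exception (the IOBlock/withFile names and the data.txt file are illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ResourceWrap {

    @FunctionalInterface
    interface IOBlock<R> {
        R apply(BufferedReader reader) throws IOException;
    }

    public static <R> R withFile(String path, IOBlock<R> block) {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) { // pre code: open the resource
            return block.apply(reader);                                          // actual behavior
        } catch (IOException e) {                                                // post code: close + wrap exception
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // data.txt is just a placeholder file name for the example
        System.out.println(withFile("data.txt", BufferedReader::readLine));
    }
}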

Java 8 has tons of features that can make code concise, and I used just one of them to implement something really useful.

The code used in this blog is available @ github

Thursday, 24 September 2015

Performance tuning story

Recently I was doing performance tuning of an application's startup time; it was taking close to 30 minutes, and a cold restart was a real pain.

In this blog I will share the story of this performance tuning experience.

Current state of application
The application loads data from the database at startup and keeps it in memory for fast response times, and to make things interesting, all loading happens using multiple threads :-)

The current loading logic is described below.



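Roughly, the per-record logic had the following shape (all names are illustrative stand-ins; the real code talks to a database):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LoaderV0 {

    static final Map<Integer, String> cache = new HashMap<>();
    static final Object lock = new Object();

    // stands in for a per-record SQL query
    static String queryDbForRecord(int key) {
        return "detail-" + key;
    }

    public static void load(List<Integer> keys) {
        keys.parallelStream().forEach(key -> {
            synchronized (lock) {                      // lock taken once per record
                String detail = queryDbForRecord(key); // database call made while holding the lock
                cache.put(key, detail);                // update the in-memory cache
            }
        });
    }

    public static void main(String[] args) {
        load(Arrays.asList(1, 2, 3, 4, 5));
        System.out.println(cache);
    }
}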








Looking at the above logic, it is clear that the lock and the per-record database query must be causing the problem, and profiling confirmed that.
The code went through a couple of rounds of improvement before it reached an acceptable timing.

Round 1 - Remove nested query

In this round the per-record database query was replaced with one query that brings all the data required for the records, and each per-record request was then served from that master data set.

After that change, the code looked something like this:




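A sketch of the Round 1 shape, reusing the same illustrative cache/lock/stub-query names as above and assuming the records are processed in chunks: one bulk query brings back the data for a whole chunk, and the records are then served from that master data set:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LoaderV1 {

    static final Map<Integer, String> cache = new HashMap<>();
    static final Object lock = new Object();

    // stands in for one bulk SQL query that brings back all rows for the given keys
    static Map<Integer, String> bulkQuery(List<Integer> keys) {
        return keys.stream().collect(Collectors.toMap(k -> k, k -> "detail-" + k));
    }

    public static void load(List<List<Integer>> partitions) {
        partitions.parallelStream().forEach(partition -> {
            synchronized (lock) {
                Map<Integer, String> master = bulkQuery(partition); // one query instead of one per record
                master.forEach(cache::put);                         // records served from the master data set
            }
        });
    }

    public static void main(String[] args) {
        load(Arrays.asList(Arrays.asList(1, 2, 3), Arrays.asList(4, 5, 6)));
        System.out.println(cache);
    }
}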













This gave a 30% improvement, which was a good starting point, with a small trade-off of extra transient memory.

Round 2 - Reduce scope of lock

Since this code was multi-threaded, this time profiling showed a hotspot on the lock, and the way to avoid that is to either remove the lock or reduce its scope.

The scope of the lock was reduced, and this allowed the logic to be broken into 2 steps:

 - Read from database
 - Update cache.

Earlier the database query was done after the lock was acquired; with the new approach that changed, and all parallel requests could query the database with no contention on the cache.

The code looked something like this:




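A sketch of the Round 2 shape, with the same illustrative names: the database read moves out of the lock, so the lock only covers the cache update:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LoaderV2 {

    static final Map<Integer, String> cache = new HashMap<>();
    static final Object lock = new Object();

    // stands in for one bulk SQL query
    static Map<Integer, String> bulkQuery(List<Integer> keys) {
        return keys.stream().collect(Collectors.toMap(k -> k, k -> "detail-" + k));
    }

    public static void load(List<List<Integer>> partitions) {
        partitions.parallelStream().forEach(partition -> {
            Map<Integer, String> master = bulkQuery(partition); // step 1: read from database, no lock held
            synchronized (lock) {
                master.forEach(cache::put);                      // step 2: update the cache under the lock
            }
        });
    }

    public static void main(String[] args) {
        load(Arrays.asList(Arrays.asList(1, 2, 3), Arrays.asList(4, 5, 6)));
        System.out.println(cache);
    }
}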









This gave another 40% gain with a little more trade-off of transient memory, but memory was not the issue, because an Oracle ResultSet only releases memory after it is closed, so memory-wise there is no significant difference.

A 70% improvement was great, but there was scope for more, so one more improvement was made to make it faster.

Round 3 - Single Writer Batch update
Now the entire bottleneck was on the "write to cache" step because of multiple writers, and it was reduced by using a single writer doing batch updates to the cache.

db query reader & cache writer were connected using queue, after this change code looked something like this.




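A sketch of the Round 3 shape: reader threads push query results onto a BlockingQueue, and a single writer drains it and updates the cache in batches (the names and the poison-pill shutdown are illustrative choices, not the original code):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.stream.Collectors;

public class LoaderV3 {

    static final Map<Integer, String> cache = new HashMap<>();
    static final Object lock = new Object();
    static final Map<Integer, String> POISON = new HashMap<>(); // end-of-work marker

    // stands in for one bulk SQL query per partition
    static Map<Integer, String> bulkQuery(List<Integer> keys) {
        return keys.stream().collect(Collectors.toMap(k -> k, k -> "detail-" + k));
    }

    public static void load(List<List<Integer>> partitions) throws InterruptedException {
        BlockingQueue<Map<Integer, String>> queue = new LinkedBlockingQueue<>();

        // single writer: drains the queue and updates the cache in batches
        Thread writer = new Thread(() -> {
            boolean done = false;
            try {
                while (!done) {
                    List<Map<Integer, String>> batch = new ArrayList<>();
                    batch.add(queue.take());                  // wait for at least one result
                    queue.drainTo(batch);                     // grab whatever else is already queued
                    done = batch.removeIf(m -> m == POISON);  // stop after this batch if the marker was seen
                    synchronized (lock) {                     // one lock acquisition per batch
                        batch.forEach(cache::putAll);
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();

        // readers: query the database in parallel and hand results to the writer via the queue
        partitions.parallelStream().forEach(partition -> queue.add(bulkQuery(partition)));
        queue.add(POISON);
        writer.join();
    }

    public static void main(String[] args) throws InterruptedException {
        load(Arrays.asList(Arrays.asList(1, 2, 3), Arrays.asList(4, 5, 6)));
        System.out.println(cache);
    }
}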












Now the lock was acquired only a few times, and as much data as possible was written while it was held; this gave around a 25% gain.

With the above improvements, startup time improved by 95%, and that was enough to stop further experiments.

Conclusion
 - Avoid making lots of small queries to the database in a loop.
 - Never do I/O or network calls while holding a lock.
 - Reduce the scope of locks.
 - Batch expensive operations.

Wednesday, 23 September 2015

Collection resize and merge policy

Collections like ArrayList/Vector are extensions of a plain array with no size restriction; these collections have dynamic grow/shrink behavior.

Growing/shrinking an array is an expensive operation, and to amortize that cost different types of policies are used.

In this blog I will share some of the policies that can be used if you are building such a collection.

Growing
To decide the best policy for extending, we need to answer two questions:
 - When to extend the collection
 - By how much

The "when" part is easy to answer: definitely when elements are added. The main trick is in the "how much".
Java's collection (i.e. ArrayList) grows by 50% every time, which is a good option because it reduces frequent allocation and array copying; the only problem with this approach is that most of the time the collection is not fully filled.

There are some options to avoid wasting space:
 - If you know how many elements will be required, then create the collection with that initial size, so that no resizing is required
 - Have control over how much it grows; this is easy to implement by making the growth factor a parameter of the collection.

Below is a code snippet for a growing list.

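A minimal sketch of such a list, with the growth factor as a constructor parameter (the FastList name is illustrative):

import java.util.Arrays;

public class FastList<E> {

    private Object[] elements;
    private int size;
    private final double growthFactor;

    public FastList(int initialCapacity, double growthFactor) {
        this.elements = new Object[initialCapacity];
        this.growthFactor = growthFactor;
    }

    public void add(E element) {
        if (size == elements.length) {                                    // grow only when full
            int newCapacity = (int) (elements.length * (1 + growthFactor)) + 1;
            elements = Arrays.copyOf(elements, newCapacity);              // allocate bigger array and copy
        }
        elements[size++] = element;
    }

    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) elements[index];
    }

    public int size() {
        return size;
    }
}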

Shrink
The remove operation on the collection is a good hook to decide when to reduce the size, and for the "how much" some factor can be used, e.g. 25%: if after a remove the collection is 25% free, then shrink it to the current size.

Java's ArrayList does not shrink automatically when elements are removed, so this should be watched when a collection is reused, because it will never shrink, and in many production systems collections are only half filled. It would be nice to have control over shrinking when elements are deleted.

Code snippet for auto shrinking

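A sketch of a remove operation that shrinks the backing array once it is at least 25% free (the ShrinkingList name and the 16-element floor are illustrative choices):

import java.util.Arrays;

public class ShrinkingList<E> {

    private Object[] elements = new Object[16];
    private int size;

    public void add(E element) {
        if (size == elements.length) {
            elements = Arrays.copyOf(elements, elements.length + elements.length / 2);
        }
        elements[size++] = element;
    }

    public E remove(int index) {
        @SuppressWarnings("unchecked")
        E removed = (E) elements[index];
        System.arraycopy(elements, index + 1, elements, index, size - index - 1);
        elements[--size] = null;
        if (elements.length > 16 && size <= elements.length * 0.75) {   // 25% or more of the array is free
            elements = Arrays.copyOf(elements, Math.max(16, size));      // shrink to the current size
        }
        return removed;
    }

    public int size() {
        return size;
    }
}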


Hybrid Growth/Shrink
The above allocation/de-allocation has a couple of issues which can have a big impact on performance, e.g.
 - Big array allocations
 - Copying elements

A custom allocation policy can be used to overcome this problem by having one big container that holds multiple slots in which elements are stored.

Elements are always appended to the last pre-allocated slot, and when it has no capacity left a new slot is created and used; this custom allocation has a few nice benefits, e.g.

 - No big allocation is required
 - Zero copy overhead
 - Really big lists (i.e. greater than Integer.MAX_VALUE elements) can be created
 - Such a collection works well with JDK 8 collectors, because the merge cost is very low.

Nothing is free; this comes with a trade-off:
 - Accessing an element by index can be a little slower, because it has to work out which slot holds the item

Code snippet for slots based collection.

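A sketch of a slot-based list where growth just adds a new slot and never copies existing elements (the SlottedList name and the 1024 slot size are illustrative):

import java.util.ArrayList;
import java.util.List;

public class SlottedList<E> {

    private static final int SLOT_SIZE = 1024;
    private final List<Object[]> slots = new ArrayList<>();
    private long size;

    public void add(E element) {
        int offset = (int) (size % SLOT_SIZE);
        if (offset == 0) {
            slots.add(new Object[SLOT_SIZE]);        // open a new slot; old elements are never copied
        }
        slots.get(slots.size() - 1)[offset] = element;
        size++;
    }

    @SuppressWarnings("unchecked")
    public E get(long index) {
        // index access has to locate the slot first, the trade-off mentioned above
        return (E) slots.get((int) (index / SLOT_SIZE))[(int) (index % SLOT_SIZE)];
    }

    public long size() {
        return size;
    }
}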

The code used in this blog is available @ github