
Sunday, 13 January 2019

How Go lang structs work

This is the 3rd post in my Go lang series. If you want to read the earlier posts, go to:

is-it-worth-learning-golang
what-are-golang-types

Structs are cool types: they allow you to create user-defined types.

Struct basics
A struct can be declared like this:

type person struct {
   firstName string
   lastName  string
}

This declares a struct with 2 fields.

A struct variable can be declared like this:

var p1 person

The var construct initializes p1 to its zero value, so both string fields are set to "".

The dot (.) operator is used to access fields.

How to define struct variables
There are a couple of ways a variable can be created:

var p1 person                                      // Zero value
var p2 = person{}                                  //Zero value
p3 := person{firstName: "James", lastName: "Bond"} //Proper initialization
p4 := person{firstName: "James"}                   //Partial initialization

p5 := new(person)      // Using the new operator; this returns a pointer (*person)
p5.firstName = "James" // the dot operator works on struct pointers too; Go dereferences automatically
p5.lastName = "Bond"

Struct comparison
Structs of the same type can be compared using the "==" operator.

p1 := person{firstName: "James", lastName: "Bond"}
p2 := person{firstName: "James", lastName: "Bond"}


if p1 == p2 {
   fmt.Println("Same person found!!!!", p1)
} else {
   fmt.Println("They are different", p1, p2)
}

This shows the power of pure values: no equals/hashCode boilerplate is required for comparison; the language has first-class support for comparing by value.
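
One caveat, as a small sketch (personWithTags is a hypothetical type, not from the original example): "==" works only when every field of the struct is comparable. Add a slice or map field and the comparison no longer compiles.

type personWithTags struct {
   name string
   tags []string // slice fields make the whole struct non-comparable
}

// Comparing two personWithTags values with == is a compile-time error:
//   invalid operation: struct containing []string cannot be compared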

Struct conversion
Go does not have casting; it supports conversion, and conversion applies to any type, not just structs.

Casting keeps a reference to the source object and overlays the target type's layout on top of it, so any change made to the source object after the cast is visible through the target object. This is good for reducing memory overhead, but bad for safety, because values can change magically underneath you.

Conversion, on the other hand, copies the source value; after the conversion, source and target have no link, and changing one does not affect the other. This is good for type safety and makes the code easy to reason about.

Let's look at a struct conversion example.

type person struct {
   firstName string
   lastName  string
}

type anotherperson struct {
   firstName string
   lastName  string
}

Both of the above have the same structure, but values of one type can't be assigned to the other without conversion.

p1 := person{firstName: "James", lastName: "Bond"}
anotherp1 := anotherperson{firstName: "James", lastName: "Bond"}


p1 = anotherp1          // This is a compile-time error
p1 = person(anotherp1)  // This is allowed

The compiler is smart enough to figure out that these two types are compatible, so the conversion is allowed.
Now if you go and change the anotherperson struct, by dropping a field, adding a new field, or changing the field order, the types become incompatible and the compiler stops the conversion!

When the conversion is allowed, new memory is allocated for the target variable and the value is copied.

For example:

p1 = person(anotherp1)
anotherp1.lastName = "Lee" // Has no effect on p1


How structs are allocated

A struct is a composite type, and understanding its memory layout is very useful for knowing what overhead it carries.

Modern processors do some clever things to make reads and writes fast and safe.
Memory allocation is aligned to the word size of the underlying platform (32-bit or 64-bit), and it is also aligned based on the size of the type; for example, a 4-byte value is aligned to a 4-byte address.

Alignment is very important for both speed and correctness.
Let's take an example: on a 64-bit platform the word size is 64 bits (8 bytes), so reading one word takes a single instruction.

Take a 2-byte value as an example. If it is allocated across two words (i.e., straddling a word boundary), then reading or writing it takes multiple operations, and for writes some kind of synchronization might be required.

Since the value is only 2 bytes, it can easily fit in a single word, so the compiler will allocate it within one word. Such an allocation is optimized for read/write, and struct allocation works on the same principle.

Now let's take a struct as an example and see what its memory layout will be.

type layouttest struct {
   b  byte
   v  int32
   f  float64
   v2 int32
}

The layout of layouttest on a 64-bit platform looks like below:

[ 1 X X X 1 1 1 1 ][ 1 1 1 1 1 1 1 1 ][ 1 1 1 1 X X X X ]

X stands for padding.
It takes 3 words to place this struct, and padding is added so that each field is aligned to its own size (the int32 to a 4-byte boundary, the float64 to an 8-byte boundary).
If we add up the field sizes (1 + 4 + 8 + 4 = 17 bytes), the content would fit in 17 bytes, but the struct occupies 3 full words (8 * 3 = 24 bytes), so 7 bytes are spent on padding.

Go gives the developer full control over memory layout, and a more compact struct can be created by ordering fields carefully.

type compactlyouttest struct {
   f  float64
   v  int32
   v2 int32
   b  byte
}

The struct above reorders the fields in descending order of size, which produces the memory layout below:

[ 1 1 1 1 1 1 1 1 ][ 1 1 1 1 1 1 1 1 ][ 1 X X X X X X X ]

In this arrangement all of the padding collapses into a single trailing run. For this particular pair of structs the total is still 3 words, but with more fields (or different field sizes) descending-size ordering can genuinely shrink a struct, and you might be tempted to always use the compact representation.

You should not do this, for a couple of reasons:
 - It breaks readability, because related fields get scattered all over the place.

 - Memory might not be an issue, so it could be pure over-optimization.

 - Processors are very smart: values are read a cache line at a time, not a word at a time, so the CPU reads multiple words at once and you will likely never see any slowness on reads. You can read about how cache lines work in the cpu-cache-access-pattern post.

 - Over-optimization can result in false sharing; read concurrent-counter-with-no-false-sharing to see the impact of false sharing in multi-threaded code.



So profile your application before doing any such optimization.

Go has built-in packages for getting memory alignment details and other static information about types.

The code below is a minimal sketch (using the unsafe package and the two structs defined above) that prints struct sizes, alignment, and field offsets:
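
package main

import (
   "fmt"
   "unsafe"
)

type layouttest struct {
   b  byte
   v  int32
   f  float64
   v2 int32
}

type compactlyouttest struct {
   f  float64
   v  int32
   v2 int32
   b  byte
}

func main() {
   var l layouttest
   var c compactlyouttest

   // Total size includes any padding the compiler inserted.
   fmt.Println("Sizeof(layouttest):", unsafe.Sizeof(l))
   fmt.Println("Sizeof(compactlyouttest):", unsafe.Sizeof(c))

   // Alignment requirement of the struct as a whole.
   fmt.Println("Alignof(layouttest):", unsafe.Alignof(l))

   // Field offsets reveal exactly where padding was added.
   fmt.Println("b at", unsafe.Offsetof(l.b), "v at", unsafe.Offsetof(l.v),
      "f at", unsafe.Offsetof(l.f), "v2 at", unsafe.Offsetof(l.v2))
}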

The unsafe and reflect packages expose a lot of internal details; the idea looks like it came from Java.

The code used in this post is available @ 001-struct github.

Thursday, 10 January 2019

What are Golang types


Go is a strongly typed language, and types are life. The language has rich types and good support for extending them. Types provide integrity.

In this post I will share some of the primitive types and how Go handles them.

Everything in a computer is 0s and 1s, and only these two values are used to represent any value we want.
The arrangement of the 0s and 1s tells us what the value is.


Take the example of a byte at some memory location holding the bit pattern 00001010.

What is it? You need type information.

If the type is int then the value is 10; if the type is an enum then it is some other value.

Type information tells us the value and the size; for example, if the type is boolean, it tells us it is a single-byte value.

Information about the types supported by Go can be found on the Lang Spec Types page.

How to declare a variable?

var variablename type
variablename := value // Short declaration

Both of the above declare a variable, but the way it is initialized is very different.

var creates the variable and initializes it with the ZERO value of its type. The zero value is very special: it makes code bug-free and clean! No null checks.

The zero value is based on the type: for integer types it is 0, for boolean it is false, for string it is empty.

Go has some types, like int, whose size depends on the underlying architecture: 4 bytes on a 32-bit arch or 8 bytes on a 64-bit arch. This is also a good example of mechanical sympathy with the underlying platform.


Examples of variable declaration
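
For example, a small sketch (the variable names here are just for illustration):

package main

import "fmt"

func main() {
   var count int    // var: initialized to the zero value 0
   var name string  // zero value is the empty string ""
   var price = 99.5 // type float64 is inferred from the initializer
   active := true   // short declaration: type bool is inferred

   fmt.Println(count, name, price, active)
}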



Alias for built-in types

This is a very powerful feature: it allows built-in types to be extended by adding behavior. (Strictly speaking, Go calls these named or defined types; a true alias, declared with =, cannot have methods added to it.)
Example of type alias
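
A minimal sketch of the idea (assuming toBinary should return the value's binary string form):

package main

import (
   "fmt"
   "strconv"
)

// RichInt extends the built-in int with behavior.
type RichInt int

// toBinary returns the binary representation of the value.
func (r RichInt) toBinary() string {
   return strconv.FormatInt(int64(r), 2)
}

func main() {
   r := RichInt(10)
   fmt.Println(r.toBinary()) // prints 1010
}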

In the above example, RichInt has a toBinary function that returns the binary value. I will share later how to extend types when we explore methods on types.

Casting vs Conversion
Casting is magic: it allows one type to be converted to another implicitly. How many times in Java have you lost value in a long/int or double/float cast?
Go has the concept of conversion: you explicitly convert from type x to type y, paying a little extra memory in exchange for safety.

The Go lang spec has some good examples.
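
For instance, a minimal sketch of explicit numeric conversion:

package main

import "fmt"

func main() {
   var i int64 = 123456789
   f := float64(i) // explicit conversion: nothing happens implicitly
   b := byte(i)    // also explicit: the truncation is visible in the code
   fmt.Println(f, b)
}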

Some real custom types
Go has support for the struct type: it is a pure value type, with no noise of behavior attached to it.
It gives control of memory layout: you can choose a really compact layout to avoid padding, or add padding if required.

A struct can be declared like below:

type person struct {
   firstName string
   lastName  string
   age       int
}

Once the struct is defined, we can create a value of that struct type.
It is a value, not an object; just remember that!

A value can be created using the code below:

var p1 person

The above code creates a value and initializes it with the zero value: the strings are initialized to empty and the int to 0.
No null check is required when processing p1, because it is initialized to its ZERO value.

A short declaration can be used to specify non-zero or other values:

p2 := person{firstName: "James", lastName: "Bond", age: 35}

Zero values and this convenient way of creating values kill the need for constructors or destructors in Go.
You can now start seeing the power of values: no overhead of constructors, destructors, or complex life cycles.

I know you will have a question: what about the special init code or cleanup code that is sometimes required?

Behavior is handled very differently; we will go over that in a later post.

Structs can also be nested, and zero values and short declarations work like magic!

We will create some additional structs:

type address struct {
   address1 string
   address2 string
   city     string
}

type contact struct {
   landLine int
   mobile   int
}

type person struct {
   firstName      string
   lastName       string
   age            int
   add            address
   contactDetails contact
}

p3 := person{firstName: "James", lastName: "Bond", age: 35,
   add:            address{address1: "30 Wellington Square", address2: "Street 81"},
   contactDetails: contact{mobile: 11119999}}
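
The zero value works through nesting too; a small sketch reusing the types above:

var p4 person
fmt.Println(p4.add.city == "")        // true: nested structs are zero-valued recursively
fmt.Println(p4.contactDetails.mobile) // 0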

The code used in this post is available @ letsgo github repo.

Is it worth learning Golang ?

I was looking for a new language to learn, and Go looked like a very good candidate. It is getting popular due to its simplicity and power.

It was created by some of the best minds in our industry:
  • Robert Griesemer - Google's V8 JavaScript engine, the Java HotSpot virtual machine
  • Rob Pike - UNIX, and co-creator of the world's most popular character encoding, UTF-8
  • Ken Thompson - UNIX, the B and C languages, and co-creator of UTF-8
Now we have so many language choices.
For ease of programming people use dynamic languages like Python, Ruby, or JavaScript; for safety, the options are C++, Java, C#, or functional/VM-based languages (Scala, Clojure, etc.).

So it seems that if you want ease you give up safety, or vice versa. Some newer languages came with fancy syntax to give you both, but they turned out really hard to learn.

Go took a very different approach (still using curly braces), keeping the syntax simple enough that most programmers can read the code, while solving hard issues like:

 - Memory management/garbage collection.
 - Pure value types: no abstraction on top of abstraction; data-oriented design.
 - Design for multi-core.
 - Distributed computing support.
 - Access to low-level programming constructs.
 - Portability across many OSes.
 - An interesting module system and dependency management.
 - Very simple error handling.
 - Interesting support for OOP.
 - Easy to read, with a simple mental model. No hiding of costs like how much memory allocation or CPU processing is required.

A picture is better than a thousand words, so I picked up some slides from GoCon Tokyo on Go's efficiency and concurrency.

If you want to learn a new programming language today, Go looks like a very interesting choice.
It is not perfect; read about the things the community doesn't like @ go-is-not-good to also get an idea of what was left out of Go.

I am starting to learn Go and will be sharing my experience with it.
Let's Go for it :-)

Thursday, 22 November 2018

Spark Run local design pattern

Many Spark applications have now become legacy applications, and it is very hard to enhance, test, and run them locally.

Spark has very good testing support, but many Spark applications are still not testable.
I will share one common error that you see when you try to run some old Spark applications locally.




When you see such an error, you have 2 options:
 - Accept that it can't be run locally and continue working with the frustration.
 - Fix it to run locally, and set an example of The Boy Scout Rule for your team.


I will show a very simple pattern that will save you from such frustration.

This code uses an isLocalSpark function to decide how to handle local mode; you can use any technique to make that decision, like an env parameter, a command-line parameter, or anything else.

Once you know it is a local run, create the Spark context accordingly.

Now this code can run locally and also via spark-submit.

Happy Spark Testing.

The code used in this post is available @ runlocal repo.

Sunday, 18 November 2018

Insights from Spark UI

As a continuation of the anatomy-of-apache-spark-job post, I will share how you can use the Spark UI for tuning jobs.

I will continue with the same example that was used in the earlier post; the new Spark application will do the below things:

 - Read the New York City parking-ticket data
 - Aggregate by "Plate ID" and calculate the offence dates
 - Save the result

The DAG for this code looks like this:
This is a multi-stage job, so some data shuffle is required; for this sample the shuffle write is 564 MB and the output is 461 MB.

Let's see what we can do to reduce this.
Taking a top-down approach from "Stage 2", the first thing that comes to mind is to explore compression.

Current code

New code

The new code only enables gzip on the write; let's see what shows up in the Spark UI.

Save with gzip
With just this write encoding, the output went down by 70%. It is now 135 MB, and it sped up the job.

Let's see what else is possible before we dive into more internal tuning.

The final output looks something like below:

1RA32   1       05/07/2014
92062KA 2       07/29/2013,07/18/2013
GJJ1410 3       12/07/2016,03/04/2017,04/25/2015
FJZ3486 3       10/21/2013,01/25/2014
FDV7798 7       03/09/2014,01/14/2014,07/25/2014,11/21/2015,12/04/2015,01/16/2015

The offence dates are stored in a raw string format; it is possible to apply a little encoding to them to get some more speed.

Java 8 added LocalDate to make date manipulation easy, and this class comes with some handy functions; one of them is toEpochDay.
This function converts a date to the number of days since 1970, which means that in 4 bytes (an int) we can store 5K+ years' worth of dates; this is a big saving compared to the current format, which takes 10 bytes.

Code snippet with epochDay

The Spark UI after this change (I have also made one more change, to use the KryoSerializer):
This is a huge improvement: the shuffle write went from 564 MB to 409 MB (27% better) and the output from 134 MB to 124 MB (8% better).

Now let's go to another section of the Spark UI, which shows logs from the executor side.
The GC logs for the above run show the following:

2018-10-28T17:13:35.332+0800: 130.281: [GC (Allocation Failure) [PSYoungGen: 306176K->20608K(327168K)] 456383K->170815K(992768K), 0.0222440 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
2018-10-28T17:13:35.941+0800: 130.889: [GC (Allocation Failure) [PSYoungGen: 326784K->19408K(327168K)] 476991K->186180K(992768K), 0.0152300 secs] [Times: user=0.09 sys=0.00, real=0.02 secs]
2018-10-28T17:13:36.367+0800: 131.315: [GC (GCLocker Initiated GC) [PSYoungGen: 324560K->18592K(324096K)] 491332K->199904K(989696K), 0.0130390 secs] [Times: user=0.11 sys=0.00, real=0.01 secs]
2018-10-28T17:13:36.771+0800: 131.720: [GC (GCLocker Initiated GC) [PSYoungGen: 323744K->18304K(326656K)] 505058K->215325K(992256K), 0.0152620 secs] [Times: user=0.09 sys=0.00, real=0.02 secs]
2018-10-28T17:13:37.201+0800: 132.149: [GC (Allocation Failure) [PSYoungGen: 323456K->20864K(326656K)] 520481K->233017K(992256K), 0.0199460 secs] [Times: user=0.12 sys=0.00, real=0.02 secs]
2018-10-28T17:13:37.672+0800: 132.620: [GC (Allocation Failure) [PSYoungGen: 326016K->18864K(327168K)] 538169K->245181K(992768K), 0.0237590 secs] [Times: user=0.17 sys=0.00, real=0.03 secs]
2018-10-28T17:13:38.057+0800: 133.005: [GC (GCLocker Initiated GC) [PSYoungGen: 324016K->17728K(327168K)] 550336K->259147K(992768K), 0.0153710 secs] [Times: user=0.09 sys=0.00, real=0.01 secs]
2018-10-28T17:13:38.478+0800: 133.426: [GC (Allocation Failure) [PSYoungGen: 322880K->18656K(326144K)] 564301K->277690K(991744K), 0.0156780 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2018-10-28T17:13:38.951+0800: 133.899: [GC (Allocation Failure) [PSYoungGen: 323808K->21472K(326656K)] 582842K->294338K(992256K), 0.0157690 secs] [Times: user=0.09 sys=0.00, real=0.02 secs]
2018-10-28T17:13:39.384+0800: 134.332: [GC (Allocation Failure) [PSYoungGen: 326624K->18912K(317440K)] 599490K->305610K(983040K), 0.0126610 secs] [Times: user=0.11 sys=0.00, real=0.02 secs]
2018-10-28T17:13:39.993+0800: 134.941: [GC (Allocation Failure) [PSYoungGen: 313824K->17664K(322048K)] 600522K->320486K(987648K), 0.0111380 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]

Let's focus on one of the lines:

2018-10-28T17:13:39.993+0800: 134.941: [GC (Allocation Failure) [PSYoungGen: 313824K->17664K(322048K)] 600522K->320486K(987648K), 0.0111380 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]

The heap before this minor GC was 600 MB and after it 320 MB, while the total heap size is 987 MB.
The executor is allocated 2 GB, so this Spark application is not using all of the memory; we can put more load on the executor by sending more tasks or bigger tasks.

I will reduce the input partitions from 270 to 100.


With 270 input partition 


With 100 input partition













100 input partitions looks better, with around 10% less data to shuffle.

Other tricks
Now I will share some things that make a big difference to GC!

Code before optimization

Code after optimization

The new code does an optimized merge of the sets, adding the small set to the big one, and it also introduces a case class.
Another optimization is in the save function, where mapPartitions is used together with a StringBuffer to reduce object allocation.

I used http://gceasy.io to get some GC stats.

Before code change


After code change

The new code is producing less garbage, e.g.:
 - Total GC: 126 GB vs 122 GB (around 4% better)
 - Max GC time: 720 ms vs 520 ms (around 28% better)

Optimization looks promising.

All the code used in this post is available in the github repo sparkperformance.

Stay tuned for more on this.

Saturday, 10 November 2018

SQL is Stream


The Streams API in any language feels like writing SQL.

map is SELECT columns
filter is WHERE
count is COUNT(1)
limit is LIMIT X
collect is fetching all the results on the client side

So it is very easy to map the functions of the Streams API to parts of SQL.

Object-relational mapping frameworks (Hibernate, MyBatis, JPA, TopLink, ActiveRecord, etc.) give a good abstraction over SQL, but they add a lot of overhead, do not give much control over how the SQL is built, and many times you have to write native SQL anyway.


ORMs never made writing SQL easy, and if you don't trust me, take a quick look at how the code ends up.

Sometimes I feel that engineers are writing more annotations than real algorithms!

To implement any feature we have to keep switching between the SQL API and the non-SQL API; this makes the code hard to maintain, and many times it is not optimal either.

This problem can be solved by a library based on the Streams API that generates SQL; then we don't have to switch, and it becomes a unified programming experience.

With such a library, testing becomes easy, because the source of the stream can be changed as needed: in a real environment it is the database, and in a test it is an in-memory data structure.

In this post I will share a toy example of what such a library could look like.

Code Snippet

Stream<StocksPrice> rows = stocksTable.stream();
long count = rows
                .filter(Where.GT("volume", 1467200))
                .filter(Where.GT("open_price", 1108d))
                .count();

The above code generates:
Select Count(1) From stocks_price where volume > 1467200 AND open_price > 1108

Look at another example, with LIMIT:

stocksTable.stream()
                .filter(Where.GT("volume", 1467200))
                .filter(Where.GT("open_price", 1108d))
                .limit(2)
                .collect(Collectors.toList());

Select stock_symbol,open_price,high_price,trade_date FROM stocks_price WHERE volume > 1467200 AND open_price > 1108.0 LIMIT 2

This API can also use code generation to give compile-time safety, like checking column names, types, etc.

Benefits

The Streams API brings some other benefits, like:
 - Parallel execution.
 - Joins between database data and non-database data can be done easily using map.
 - It allows a pure streaming approach, which is good when dealing with huge data.
 - It opens up the option of generating natively optimized queries, because multiple phases of the pipeline can be merged.

This programming model is not new; it is very common in distributed computing frameworks like Spark, Kafka, and Flink.

Spark's Dataset is based on this approach: it generates optimized queries, e.g. pushing filters down to storage, reducing reads by looking at partitions, selective column reads, etc.

Conclusion

Database drivers should offer a stream-based API; this would help reduce the dependency on ORM frameworks.
This is a very powerful programming model and opens up lots of options.

The code used in this post is available @ streams github repo.

Friday, 26 October 2018

Broken promise of Agile

The Agile Manifesto was written 17 years back (i.e. in 2001); has it been able to bring change to the industry?



I would say yes, but not in the way the authors wanted.

Many consulting companies made millions of dollars, but as a software engineer I did not see the change.

How Agile broke its promise

I will list the key things the authors wanted, so we have some context to discuss this:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan



This looks so good :-)

Let's start with the education industry.

"Attend an Agile training for 2 days and you are a certified Scrum Master or Agile Developer."

What did people learn in these workshops? Standups, planning poker, retrospectives, backlog grooming, JIRA, and many more things.

The one thing that is missing is the "Agile mindset"; no one can teach or learn that in 2 days, so it is a big joke that your team went to an expensive training and is now an Agile Team.

Let's go over the main items to see how the industry treats the main goals of Agile.

  • Individuals and interactions over processes and tools
Almost every team got this wrong: they thought we need more tools, so JIRA was born, and the industry created big ceremonies (backlog grooming, standup, retro, etc.).
I think having too many things in JIRA tells you one of three things: software quality is bad, the team is too slow to deliver features, or the product team is creating a shopping list of features it never wanted.

Now it has gone to the extent that you need multiple product owners just for grooming sessions, scrum masters for running retrospective meetings, and additional project managers to track story points, burn-downs, etc.

The team spends more time on the JIRA board than talking to each other. We killed individuals & interactions.

How many tools and processes did we add? Countless.

  • Working software over comprehensive documentation
People got confused by this and said: I need working software + documents. Now nothing gets done properly, and some teams will come and say "we are AgileWater", meaning they build software fast but also produce all the documents required for waterfall.
Developers end up doing more unproductive work.

"Working software" was also taken to mean the team is allowed to release crap to production, because "we are Agile".
What was meant by working software was an MVP, not a 20% or 40% developed item; a feature is done or not done, there is no such thing as 50% when the feature is released to production.

The team is put under so much pressure to release that they end up taking shortcuts, and then a "Tech Debt" drive is required to address them.
When the software team then goes to product to get funding/approval to fix all the tech debt, product comes back and asks: why did you develop and deploy a crap piece of software in the first place?
  • Customer collaboration over contract negotiation
This is a classic: the customer uses Agile for blackmailing, or questions the team's credentials, to push last-minute changes. It gives them a licence to change the requirements at any time.
Not to miss the commitment taken in the form of story points; if the team misses it, they are expected to put in extra hours.

The dev team is no different: they use this as an excuse to build bad-quality software.

  • Responding to change over following a plan
Agile never said to build features without tests, architecture, or a plan.
Planning is a must, and so is good architecture, but it should be just enough to move in the right direction: continuous architecture, with no end state.

Teams took this as: no design or architecture, to "respond to change".

Conclusion

One of my favorite questions about sprints is:

"How long is your sprint?"
2/3 weeks?

"Why 2 weeks? Why not 1 month, or 1 year? Who decided this?"
It is written in the Agile manifesto, or my manager told me, or I don't know, other teams are doing it.

"How long does a ticket sit in the backlog before making it to production?"
I don't know. Check with my TPM, or if someone knows they will come and say 1 year or 3 years.


Agile was about giving the team the freedom to choose the sprint size based on when they are ready for feedback, or when the customer is ready to give feedback.
A sprint can be 1 week or 6 months; the key point is that you get feedback at the end and adjust.
If customers are not in the feedback loop, then go back to waterfall.

Another thing Agile was about is software craftsmanship.
Agile projects have so many non-developers (project managers, scrum masters, Agile coaches) who don't value craftsmanship, so we developers started new conferences for it and got more disconnected.

Agile was written by developers for developers, but now we are out, and the place has been taken by non-developers.

At an Agile conference, ask the question: "How many developers are here?"
You will see few hands :-( because they are at some other craftsmanship conference.

Agile projects are about project management, dates, money, time.
The manager makes sure the plan is made by them and followed by the team.

Today all projects are agile, but they still fail, run over budget, and are never on time.

Any process like Agile has one hidden piece of feedback, called "dissatisfaction", and you need to respond to that change to become better.

Our software industry has three inevitable things:
 - Degradation
 - Dysfunction
 - Expiry

Degradation -> maintaining, transformation
Dysfunction -> innovation & challenge
Expiry -> creating & starting over

Degradation, dysfunction & expiry apply to people, projects, teams, processes, strategies, organizations.

Agile is no different: identify the phase, and either create version 2 of the process or find a new one that works.