The Stream API in almost any language looks a lot like writing SQL:
- map is SELECT columns
- filter is WHERE
- count is COUNT(1)
- limit is LIMIT x
- collect fetches all results on the client side
So it is easy to map each function of the Streams API to some part of SQL.
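To make the mapping concrete, here is a small self-contained Java sketch (the `Trade` record and the sample data are made up for illustration) showing each stream operation next to the SQL it corresponds to:

```java
import java.util.List;

public class StreamSqlMapping {
    // Illustrative row type; any POJO with these fields would do.
    record Trade(String symbol, long volume, double openPrice) {}

    static final List<Trade> TRADES = List.of(
            new Trade("AAPL", 2_000_000, 1200.0),
            new Trade("MSFT", 500_000, 900.0),
            new Trade("GOOG", 1_800_000, 1150.0));

    // filter -> WHERE volume > 1467200
    // map    -> SELECT symbol
    // toList -> fetch all results on the client side
    static List<String> highVolumeSymbols() {
        return TRADES.stream()
                .filter(t -> t.volume() > 1_467_200)
                .map(Trade::symbol)
                .toList();
    }

    // count -> SELECT COUNT(1) FROM ... WHERE ...
    static long highVolumeCount() {
        return TRADES.stream()
                .filter(t -> t.volume() > 1_467_200)
                .count();
    }

    public static void main(String[] args) {
        System.out.println(highVolumeSymbols()); // [AAPL, GOOG]
        System.out.println(highVolumeCount());   // 2
    }
}
```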
Object-relational mapping frameworks (Hibernate, MyBatis, JPA, TopLink, ActiveRecord, etc.) give a good abstraction over SQL, but they add a lot of overhead, give little control over how the SQL is built, and many times force you to write native SQL anyway.
ORMs never made writing SQL easy, and if you don't trust me, take a quick look at how that code typically ends up.
To implement any feature we have to keep switching between the SQL API and the non-SQL API; this makes code hard to maintain, and many times it is not optimal either.
This problem can be solved by a library that is based on the Streams API and generates SQL: then we don't have to switch, and programming becomes a unified experience.
With such a library, testing also becomes easy, because the source of the stream can be swapped as needed: in a real environment it is the database, and in tests it is an in-memory data structure.
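That testability point can be sketched as follows: if the query logic depends only on a `Supplier<Stream<...>>`, tests can plug in an in-memory source where production would plug in a database-backed one. All names below are illustrative, not part of any real library:

```java
import java.util.function.Supplier;
import java.util.stream.Stream;

public class SwappableSource {
    record Trade(String symbol, long volume) {}

    // The query logic depends only on a stream supplier, not on where rows
    // come from. In production the supplier could wrap a database cursor;
    // in tests it wraps plain in-memory objects.
    static long countHighVolume(Supplier<Stream<Trade>> source) {
        try (Stream<Trade> rows = source.get()) {
            return rows.filter(t -> t.volume() > 1_467_200).count();
        }
    }

    public static void main(String[] args) {
        // Test environment: an in-memory data structure as the stream source.
        Supplier<Stream<Trade>> inMemory = () -> Stream.of(
                new Trade("AAPL", 2_000_000),
                new Trade("MSFT", 500_000));
        System.out.println(countHighVolume(inMemory)); // 1
    }
}
```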
In this post I will share a toy example of how such a library could look.
Stream<StocksPrice> rows = stocksTable.stream();
long count = rows
        .filter(r -> r.getVolume() > 1467200)
        .filter(r -> r.getOpenPrice() > 1108)
        .count();
The code above generates:

SELECT COUNT(1) FROM stocks_price WHERE volume > 1467200 AND open_price > 1108
Let's look at another example, this time with a limit:
SELECT stock_symbol, open_price, high_price, trade_date FROM stocks_price WHERE volume > 1467200 AND open_price > 1108.0 LIMIT 2
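One way such a library could produce that SQL is to record the pipeline steps instead of executing them. Below is a minimal, hypothetical builder (not the actual library from this post) that accumulates filter/select/limit calls and renders them as a query string:

```java
import java.util.ArrayList;
import java.util.List;

public class ToySqlBuilder {
    private final String table;
    private final List<String> predicates = new ArrayList<>();
    private List<String> columns = List.of("*");
    private Integer limit;

    ToySqlBuilder(String table) { this.table = table; }

    // Each stream-style call just records its step; nothing runs yet.
    ToySqlBuilder filter(String predicate) { predicates.add(predicate); return this; }
    ToySqlBuilder select(String... cols) { this.columns = List.of(cols); return this; }
    ToySqlBuilder limit(int n) { this.limit = n; return this; }

    // A terminal operation would call this and ship the SQL to the database.
    String toSql() {
        StringBuilder sql = new StringBuilder("SELECT ")
                .append(String.join(", ", columns))
                .append(" FROM ").append(table);
        if (!predicates.isEmpty()) {
            sql.append(" WHERE ").append(String.join(" AND ", predicates));
        }
        if (limit != null) {
            sql.append(" LIMIT ").append(limit);
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        String sql = new ToySqlBuilder("stocks_price")
                .filter("volume > 1467200")
                .filter("open_price > 1108.0")
                .select("stock_symbol", "open_price", "high_price", "trade_date")
                .limit(2)
                .toSql();
        System.out.println(sql);
    }
}
```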
Such an API can also use code generation for compile-time safety: checking column names, types, etc.
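As a sketch of what code generation could buy us, assume a generated metamodel with one typed constant per column; then a mistyped column name or a value of the wrong type fails at compile time rather than at runtime. All names here are hypothetical:

```java
public class GeneratedColumns {
    // Hypothetical code-generated metamodel: one typed constant per column.
    record Column<T>(String name, Class<T> type) {}

    static final Column<Long>   VOLUME     = new Column<>("volume", Long.class);
    static final Column<Double> OPEN_PRICE = new Column<>("open_price", Double.class);

    // A predicate built from a typed column; the value's type must match
    // the column's type parameter, so mismatches do not compile.
    static <T> String gt(Column<T> col, T value) {
        return col.name() + " > " + value;
    }

    public static void main(String[] args) {
        // gt(VOLUME, "abc") would not compile: String is not a Long.
        System.out.println(gt(VOLUME, 1_467_200L)); // volume > 1467200
    }
}
```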
The Streams API also brings some other benefits:
- Parallel execution.
- Joins between database data and non-database data can be done easily using map.
- It allows a pure streaming approach, which is good when dealing with huge data.
- It opens up the option of generating optimized native queries, because multiple phases of the pipeline can be merged.
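The join point in particular is easy to sketch: rows from a (possibly database-backed) stream can be enriched with in-memory data using plain `map`. The record and lookup table below are made up for illustration:

```java
import java.util.List;
import java.util.Map;

public class MapSideJoin {
    record Trade(String symbol, long volume) {}

    // Join stream rows (which could come from a database) with an in-memory
    // lookup table, using nothing more than map().
    static List<String> tagWithSector(List<Trade> trades, Map<String, String> sectorBySymbol) {
        return trades.stream()
                .map(t -> t.symbol() + ":" + sectorBySymbol.getOrDefault(t.symbol(), "UNKNOWN"))
                .toList();
    }

    public static void main(String[] args) {
        List<Trade> trades = List.of(new Trade("AAPL", 2_000_000), new Trade("XYZ", 100));
        Map<String, String> sectors = Map.of("AAPL", "Tech");
        System.out.println(tagWithSector(trades, sectors)); // [AAPL:Tech, XYZ:UNKNOWN]
    }
}
```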
This programming model is not new; it is very common in distributed computing frameworks like Spark, Kafka, and Flink.
Spark's Dataset is based on this approach: it generates optimized queries by pushing filters down to storage, reducing reads by looking at partitions, reading only selected columns, and so on.
Database drivers should expose stream-based APIs; this would help reduce the dependency on ORM frameworks.
This is a very powerful programming model and it opens up lots of options.
Code used in this post is available @ streams github repo.