Do you have a Spark demo?

Since compactions run while data is being inserted, inserting takes a lot of memory, and using your own bulk load method also takes a lot of memory.

Do you have a demo that uses Spark for bulk loading? Or a Java MapReduce job that creates SST files, so that I can use my own cluster to load the data? Or must I port your bulk load project to Java/Hadoop myself?
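Since the question mentions generating SST files from a MapReduce or Spark job, here is a minimal sketch of that approach, assuming the underlying store is RocksDB and the `rocksdbjni` Java binding is on the classpath. The output path and keys are illustrative; in a real Spark job each task would write one such file for its sorted partition, and the files would then be handed to the cluster's ingest step:

```java
import org.rocksdb.EnvOptions;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.SstFileWriter;

import java.nio.charset.StandardCharsets;

public class SstDemo {
    public static void main(String[] args) throws Exception {
        // Load the native RocksDB library before using any RocksDB classes.
        RocksDB.loadLibrary();
        try (Options options = new Options();
             EnvOptions envOptions = new EnvOptions();
             SstFileWriter writer = new SstFileWriter(envOptions, options)) {
            // Illustrative output path; a Spark task would use its own partition file.
            writer.open("/tmp/part-00000.sst");
            // Keys must be added in ascending comparator order, so sort the
            // partition (e.g. with repartitionAndSortWithinPartitions) first.
            writer.put("key1".getBytes(StandardCharsets.UTF_8),
                       "value1".getBytes(StandardCharsets.UTF_8));
            writer.put("key2".getBytes(StandardCharsets.UTF_8),
                       "value2".getBytes(StandardCharsets.UTF_8));
            writer.finish();
        }
        System.out.println("done");
    }
}
```

Writing SST files this way avoids the memtable and compaction cost of normal inserts; the finished files can later be ingested into the target database in one step.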

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.