scala - How to use Spark with existing Hadoop 2.x


We have Hadoop 2.5 installed on our servers. Is it possible to use this Hadoop deployment to run Spark programs? We want Spark to use the existing YARN to schedule tasks, and to be able to read and write to the existing HDFS. How can we achieve that?

You can try the Apache Spark pre-built downloads available at https://spark.apache.org/downloads.html; choose a package pre-built for a Hadoop version that matches yours.


If that does not work out, you will need to build Spark against your Hadoop version's jars, as described at https://spark.apache.org/docs/latest/building-spark.html. It is straightforward.
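
As a rough sketch, a build against Hadoop 2.5 might look like the following; the exact profile and version flags depend on your Spark release, so treat them as assumptions and verify against the building guide:

    # profile and version flags are assumptions; check building-spark.html for your release
    mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.5.0 -DskipTests clean package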

Your Spark jobs can directly access HDFS once you add the relevant settings to the spark-defaults config and point Spark at your Hadoop configuration. Check the configurations available in Spark:

https://spark.apache.org/docs/latest/configuration.html
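
For example, a minimal sketch, assuming the Hadoop client configs live in /etc/hadoop/conf and the namenode listens at hdfs://namenode:8020 (both are assumptions; use your cluster's values):

    # conf/spark-env.sh: let Spark pick up core-site.xml and hdfs-site.xml
    export HADOOP_CONF_DIR=/etc/hadoop/conf

    # conf/spark-defaults.conf: optional default filesystem (hypothetical host/port)
    spark.hadoop.fs.defaultFS    hdfs://namenode:8020

With that in place, a Spark job can read and write HDFS paths directly. A small word-count sketch in Scala (the object name and paths are hypothetical):

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal word count against the existing HDFS.
    // Namenode host/port and paths are assumptions; substitute your own.
    object HdfsWordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("HdfsWordCount")
        val sc   = new SparkContext(conf)

        val counts = sc.textFile("hdfs://namenode:8020/user/me/input.txt")
                       .flatMap(_.split("\\s+"))
                       .map(word => (word, 1))
                       .reduceByKey(_ + _)
        counts.saveAsTextFile("hdfs://namenode:8020/user/me/wordcount-output")

        sc.stop()
      }
    }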

Your Spark application can run on YARN in client or cluster mode; see https://spark.apache.org/docs/latest/running-on-yarn.html
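
For instance, a sketch of submitting the word-count job above to the existing YARN cluster (the class and jar names are the hypothetical ones from the sketch above; older Spark releases spell this --master yarn-cluster instead of --master yarn --deploy-mode cluster):

    ./bin/spark-submit \
      --class HdfsWordCount \
      --master yarn \
      --deploy-mode cluster \
      hdfs-wordcount.jar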

You do not need to make any changes to the existing Hadoop setup to make Spark work; everything is configured on the Spark side.

