How to change memory per node for apache spark worker

Asked on December 8, 2018 in Apache-spark.

  • 3 Answer(s)

With Spark 1.0.0 and later, when using spark-shell or spark-submit, use the --executor-memory option.

    For instance:

    spark-shell --executor-memory 8G ...
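When the flag is assembled programmatically (for example, from a launcher script), the invocation above can be sketched as follows. This is a minimal illustration; the function name and arguments are hypothetical, not part of Spark.

```python
import shlex

def spark_shell_cmd(executor_memory="8G", extra_args=()):
    # Assemble the spark-shell invocation with an executor-memory flag,
    # quoting each argument so the command is safe to paste into a shell.
    cmd = ["spark-shell", "--executor-memory", executor_memory, *extra_args]
    return " ".join(shlex.quote(c) for c in cmd)

print(spark_shell_cmd("8G", ["--master", "local[4]"]))
```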

For version 0.9.0 and earlier:

Set the memory when starting the shell. You will need to modify the spark-shell script so that it carries command-line arguments through as arguments for the underlying Java application. In particular:

OPTIONS="$@"
$FWDIR/bin/spark-class $OPTIONS org.apache.spark.repl.Main "$@"

Then run the spark shell as follows:

    spark-shell -Dspark.executor.memory=6g

When configuring it for a standalone jar, set the system property programmatically before creating the Spark context and pass the value in as a command-line argument (this keeps the invocation shorter than the long-winded system properties):

    System.setProperty("spark.executor.memory", valueFromCommandLine)
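The same pattern can be sketched in Python, assuming the memory value arrives as the first command-line argument. The function name and default are hypothetical; in current Spark versions you would set the value on a SparkConf rather than a system property.

```python
import sys

def executor_memory_from_argv(argv, default="2g"):
    # Take the memory value from the command line, falling back to a default.
    return argv[1] if len(argv) > 1 else default

mem = executor_memory_from_argv(sys.argv)
# conf = SparkConf().set("spark.executor.memory", mem)  # then build the context
```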

Note: with two nodes of 2 GB and one of 6 GB, the memory you can use per executor is limited by the smallest node, so here 2 GB.
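The note above can be expressed as a quick check. This is a minimal illustration with hypothetical node sizes and function name, not a Spark API:

```python
def max_usable_executor_memory_gb(node_memories_gb):
    # With heterogeneous nodes, the per-executor memory you can rely on
    # is capped by the smallest worker node in the cluster.
    return min(node_memories_gb)

# two 2 GB nodes and one 6 GB node -> capped at 2 GB
print(max_usable_executor_memory_gb([2, 2, 6]))
```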

    Answered on December 8, 2018.

For Spark 1.1.1, to set the max memory of workers, edit the configuration file in conf/.

    Try this:


If the config file does not exist yet, copy it from the template file:

    cp conf/ conf/

Then make the change, and don’t forget to source it:

    source conf/
    Answered on December 8, 2018.

Here an IPython notebook server is used to connect to Spark. To increase the executor memory, try this method:

from pyspark import SparkContext
from pyspark.conf import SparkConf

# Build the configuration before creating the context; executor memory
# cannot be changed once the SparkContext exists.
conf = SparkConf()
conf.setMaster(CLUSTER_URL).setAppName('ipython-notebook').set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
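When choosing the value, it can help to sanity-check memory strings like "2g" before handing them to the conf. The helper below is a stdlib-only sketch, not part of Spark's API:

```python
import re

def parse_memory_mb(s):
    # Convert a Spark-style memory string ("512m", "2g") to mebibytes.
    m = re.fullmatch(r"(\d+)([kmgt])b?", s.strip().lower())
    if not m:
        raise ValueError(f"bad memory string: {s!r}")
    n, unit = int(m.group(1)), m.group(2)
    factor = {"k": 1 / 1024, "m": 1, "g": 1024, "t": 1024 * 1024}[unit]
    return n * factor

print(parse_memory_mb("2g"))
```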


    Answered on December 8, 2018.
