Apache Spark: network errors between executors

Asked on January 12, 2019 in Apache-spark.

  • 2 Answers

    There is a known bug in the Netty-based networking system (the block transfer service) that was introduced in Spark 1.2.

    Adding .set("spark.shuffle.blockTransferService", "nio") to the SparkConf works around the bug, and the job then runs fine.

    If the same error appears at runtime, switch from Netty to nio in the same way.

    SPARK-5085 describes a similar problem; there too, the fix was to use nio instead of Netty and to change some networking settings.
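    The setting above can also be applied outside the application code. A sketch, assuming a Spark 1.x deployment (the spark.shuffle.blockTransferService property was removed in Spark 2.0, where Netty is the only implementation):

    ```
    # conf/spark-defaults.conf -- switch the block transfer service from Netty to NIO
    spark.shuffle.blockTransferService   nio
    ```

    The same property can be passed per job with spark-submit --conf spark.shuffle.blockTransferService=nio instead of editing the defaults file.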

    Answered on January 12, 2019.

    Check that the Spark version declared in the Maven configuration matches the version installed on the Spark server; a mismatched dependency can also cause these errors.

    For instance:

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>1.3.0</version>
        </dependency>
     
    </dependencies>
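    If the project builds with sbt rather than Maven, the equivalent dependency would look like the following (a sketch; the _2.11 suffix and the 1.3.0 version are assumptions and must match the project's Scala build and the installed Spark version):

    ```scala
    // build.sbt -- keep the Spark version in sync with the cluster installation
    libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "1.3.0" % "provided"
    ```

    The "provided" scope keeps the cluster's own Spark jars from conflicting with the ones bundled in the application.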
    
    Answered on January 12, 2019.

