Apache Spark: network errors between executors
There is a bug in the Netty-based block transfer service, which became the default in Spark 1.2.
Adding `.set("spark.shuffle.blockTransferService", "nio")` to the `SparkConf` worked around it, and the job now runs fine.
If you are seeing the same error, try switching from Netty back to `nio`.
SPARK-5085 describes a similar problem: falling back to `nio` avoided it there as well, and it was ultimately resolved by changing some networking settings.
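In code, the workaround looks roughly like this (a minimal sketch; the app name and master URL are placeholders, not taken from the original):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Fall back to the older NIO block transfer service instead of Netty.
// "MyApp" and the master URL below are illustrative placeholders.
val conf = new SparkConf()
  .setAppName("MyApp")
  .setMaster("spark://master:7077")
  .set("spark.shuffle.blockTransferService", "nio")
val sc = new SparkContext(conf)
```

Equivalently, the property can be set at submit time with `--conf spark.shuffle.blockTransferService=nio`, or in `spark-defaults.conf`.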
Also make sure the Maven dependency matches the Spark version actually installed on the server; a mismatch here can cause similar errors.
<dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>1.3.0</version>
    </dependency>
</dependencies>