Errors when using OFF_HEAP Storage with Spark 1.4.0 and Tachyon 0.6.4

Asked on November 16, 2018 in Apache-spark.


  • 1 Answer(s)

    There is a related bug report: https://issues.apache.org/jira/browse/SPARK-10314

    A pull request has been opened against that issue, so a fix may land soon.

    From that thread, it appears that Spark writes to Tachyon in TRY_CACHE mode, so blocks can be lost when they are evicted from the Tachyon cache instead of being recomputed or spilled.
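
    Until the fix lands, one common workaround is to avoid `StorageLevel.OFF_HEAP` and persist with a level that spills to local disk, so evicted partitions are re-read rather than lost. A minimal sketch (the app name and input path are placeholders, not from the question):

    ```scala
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object OffHeapWorkaround {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("off-heap-workaround") // placeholder name
        val sc = new SparkContext(conf)

        val rdd = sc.textFile("hdfs:///some/input") // placeholder input path

        // rdd.persist(StorageLevel.OFF_HEAP)          // affected by the Tachyon eviction issue
        rdd.persist(StorageLevel.MEMORY_AND_DISK_SER)  // spills to disk instead of dropping blocks

        println(rdd.count())
        sc.stop()
      }
    }
    ```

    `MEMORY_AND_DISK_SER` stores serialized partitions and falls back to disk under memory pressure, trading some CPU for the durability that OFF_HEAP storage lacks in this Spark/Tachyon combination.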


    Answered on November 16, 2018.
