reduceByKey: How does it work internally ?

Asked on November 24, 2018 in Apache-spark.


  • 2 Answer(s)

    Let's break it down into discrete methods and types; that usually exposes the intricacies for new devs. For context, pairs is assumed here to be an RDD[(String, Int)], as in a word count. A minimal, hypothetical setup (the app name, master, and sample words are placeholders):
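
    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical local setup so the snippets below have something to run against
    val sc = new SparkContext(new SparkConf().setAppName("reduceByKeyDemo").setMaster("local[*]"))
    val words = sc.parallelize(Seq("spark", "scala", "spark"))
    val pairs = words.map(word => (word, 1)) // RDD[(String, Int)]

    Now, the call in question: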

    pairs.reduceByKey((a, b) => a + b)
    

    With the types made explicit, it becomes

    pairs.reduceByKey((a: Int, b: Int) => a + b)
    

    And renaming the variables makes the intent a little clearer

    pairs.reduceByKey((accumulatedValue: Int, currentValue: Int) => accumulatedValue + currentValue)
    

    So we are simply taking an accumulated value for the given key and summing it with the next value of that key. NOW, let's go a step further so we can understand the key part. Conceptually, the method looks more like this (written as a fold over a local collection, since a plain reduce requires the accumulator and element types to match):

    // Conceptually, reduceByKey threads a key -> value mapping through the data,
    // shown here as a fold over a local collection:
    pairs.collect().foldLeft(List.empty[(String, Int)]) {
      (accumulatedValue: List[(String, Int)], currentValue: (String, Int)) =>
        // Turn the accumulated value into a true key->value mapping
        val accumAsMap = accumulatedValue.toMap
        // Try to get the key's current value if we've already encountered it
        accumAsMap.get(currentValue._1) match {
          // If we have encountered it, then add the new value to the existing value and overwrite the old
          case Some(value: Int) => (accumAsMap + (currentValue._1 -> (value + currentValue._2))).toList
          // If we have NOT encountered it, then simply add it to the list
          case None => currentValue :: accumulatedValue
        }
    }
    

    So you can see here that reduceByKey handles the boilerplate of finding each key and tracking it, so you don't have to worry about managing that part yourself.
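
    As a quick sanity check, here is that fold run over a small hypothetical list (plain Scala, no Spark needed):

    val traced = List(("spark", 1), ("scala", 1), ("spark", 1))
      .foldLeft(List.empty[(String, Int)]) { (acc, cur) =>
        acc.toMap.get(cur._1) match {
          case Some(v) => (acc.toMap + (cur._1 -> (v + cur._2))).toList
          case None    => cur :: acc
        }
      }
    // traced.toMap == Map("spark" -> 2, "scala" -> 1)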

    Answered on November 24, 2018.

    Here is an alternative explanation.

    For instance:

    val counts = pairs.reduceByKey((a,b) => a+b)
    

    In the above code, a and b are both Int accumulators for the _2 values of the tuples in pairs. reduceByKey will take two tuples with the same key s and use their _2 values as a and b, producing a new (String, Int) tuple. This operation is repeated until there is only one tuple for each key s.
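
    To make that concrete, here is a hypothetical trace for a single key "s" (values chosen purely for illustration):

    // tuples for key "s": ("s", 1), ("s", 2), ("s", 3)
    // first pass:  a = 1, b = 2 -> ("s", 3)
    // second pass: a = 3, b = 3 -> ("s", 6)
    // done: exactly one tuple remains for key "s"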

    Unlike a non-Spark (or, really, non-parallel) reduceByKey, in which the first element is always the accumulator and the second a value, Spark's reduceByKey operates in a distributed fashion: each node reduces its own set of tuples into a collection of uniquely-keyed tuples, and then the tuples from multiple nodes are reduced until there is a final uniquely-keyed set of tuples. As the results from nodes are merged, a and b may each represent already-reduced accumulators rather than raw values.
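
    A short sketch of that flow, assuming a SparkContext sc like the hypothetical one in the first answer (two partitions forced just for illustration):

    val data = sc.parallelize(Seq(("s", 1), ("s", 2), ("s", 4)), numSlices = 2)
    // Each partition combines its own values for "s" first; the already-reduced
    // per-partition results are then merged across nodes into a single tuple.
    data.reduceByKey((a, b) => a + b).collect() // Array(("s", 7))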

    Answered on November 24, 2018.

