How to work efficiently with SBT, Spark and “provided” dependencies?

Asked on January 12, 2019 in Apache-spark.


  • 3 Answer(s)

    To run the Spark application from IntelliJ IDEA, create a small launcher class in the src/test/scala directory (test, not main). The provided dependencies are then picked up by IntelliJ, because they are on the test classpath.

    object Launch {
      // Thin wrapper around the real entry point. Because it lives in
      // src/test/scala, IntelliJ puts "provided" dependencies on its classpath.
      def main(args: Array[String]): Unit = {
        Main.main(args)
      }
    }
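
    For context, this setup usually exists because the Spark artifacts are declared as provided in build.sbt, so they are not bundled into the assembly jar. A minimal sketch (artifact list and version are illustrative, not taken from the question):

    // build.sbt (sketch): Spark is supplied by the cluster at runtime,
    // so its artifacts are excluded from the packaged jar via "provided".
    val sparkVersion = "2.4.0"  // illustrative version

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
      "org.apache.spark" %% "spark-sql"  % sparkVersion % "provided"
    )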
    
    Answered on January 12, 2019.

    Newer IntelliJ versions also offer an “Include dependencies with ‘Provided’ scope” option directly in the run configuration, which solves the same problem.

    With IntelliJ, an easy way to get provided dependencies onto the classpath for a run/debug task is:

    • Right-click src/main/scala.
    • Select Mark Directory as… > Test Sources Root.

    This tells IntelliJ to treat src/main/scala as a test folder, so all dependencies tagged as provided are added to any run/debug configuration.

    Whenever the SBT project is refreshed, repeat these steps, because IntelliJ resets the folder to a regular source folder.

    Answered on January 12, 2019.

    Another approach is to create a separate subproject that is used only for running the project locally, as follows:

    Change the build.sbt file like this:

    lazy val sparkDependencies = Seq(
      // sparkVersion is assumed to be defined elsewhere in build.sbt
      "org.apache.spark" %% "spark-streaming" % sparkVersion
    )

    // Main project: Spark dependencies are "provided" (supplied by the cluster).
    libraryDependencies ++= sparkDependencies.map(_ % "provided")

    // Helper subproject used only for local runs: Spark is on the compile classpath.
    lazy val localRunner = project.in(file("mainRunner"))
      .dependsOn(RootProject(file(".")))
      .settings(
        libraryDependencies ++= sparkDependencies.map(_ % "compile")
      )
    

    After that, run the new subproject locally, selecting localRunner as the module whose classpath is used in the Run Configuration.
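
    The subproject can also be started from the command line instead of IntelliJ; a minimal sketch, assuming the application's main class is named Main (adjust the name to your project):

    # run the main class that sbt detects on localRunner's classpath
    sbt localRunner/run
    # or name the main class explicitly ("Main" is an assumed name here)
    sbt "localRunner/runMain Main"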

    Answered on January 12, 2019.

