Spark iterate HDFS directory


Asked on December 31, 2018 in Apache-spark.


  • 3 Answer(s)

    Here org.apache.hadoop.fs.FileSystem can be used; specifically, FileSystem.listFiles([path], true).

    With Spark:

    FileSystem.get(sc.hadoopConfiguration).listFiles(..., true)
    
    

    Edit:

    To get the FileSystem associated with the Path's scheme:

    path.getFileSystem(sc.hadoopConfiguration).listFiles(path, true)
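
    A minimal Scala sketch of this (assuming a spark-shell where sc is available, and hdfs:///tmp as an example directory): listFiles returns a RemoteIterator[LocatedFileStatus], so it is consumed with hasNext/next rather than a for-comprehension.

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Example directory; replace with your own path.
    val path = new Path("hdfs:///tmp")
    val fs = path.getFileSystem(sc.hadoopConfiguration)

    // listFiles returns a RemoteIterator[LocatedFileStatus]; iterate it manually.
    val files = fs.listFiles(path, true) // true = recurse into sub-directories
    while (files.hasNext) {
      println(files.next().getPath)
    }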
    

     

    Answered on December 31, 2018.

    The below code is the PySpark version:

    # Access the Hadoop FileSystem API through the JVM gateway.
    hadoop = sc._jvm.org.apache.hadoop

    fs = hadoop.fs.FileSystem
    conf = hadoop.conf.Configuration()
    path = hadoop.fs.Path('/hivewarehouse/disc_mrt.db/unified_fact/')

    # List the entries directly under the path and print each one.
    for f in fs.get(conf).listStatus(path):
        print(f.getPath())

    This gives the list of all files that make up the disc_mrt.unified_fact Hive table.

    Other methods of the FileStatus object, such as getLen() to get the file size, are described in the link below:

    Class FileStatus
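
    For illustration, a hedged Scala sketch (again assuming sc in a spark-shell; the directory is just the example path from above) that prints a few of those FileStatus fields:

    import org.apache.hadoop.fs.{FileSystem, Path}

    val fs = FileSystem.get(sc.hadoopConfiguration)
    // Example directory from above; adjust to your own table location.
    val statuses = fs.listStatus(new Path("/hivewarehouse/disc_mrt.db/unified_fact/"))

    // Each FileStatus carries metadata such as path, length and modification time.
    statuses.foreach { s =>
      println(s"${s.getPath}  len=${s.getLen}  modified=${s.getModificationTime}")
    }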

    Answered on December 31, 2018.

    Alternatively, the below code solves this issue on Spark version 1.5.0-cdh5.5.2:

    import org.apache.hadoop.fs.{FileSystem, Path}
    FileSystem.get(sc.hadoopConfiguration).listStatus(new Path("hdfs:///tmp")).foreach(x => println(x.getPath))
    

    And this will work.
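
    As a further hedged sketch (illustrative only), the same listing can be turned into a comma-separated list of file paths and read back with sc.textFile, which accepts multiple comma-separated paths:

    import org.apache.hadoop.fs.{FileSystem, Path}

    val paths = FileSystem.get(sc.hadoopConfiguration)
      .listStatus(new Path("hdfs:///tmp"))
      .filter(_.isFile)                // keep plain files, skip sub-directories
      .map(_.getPath.toString)

    // sc.textFile accepts a comma-separated list of paths.
    val rdd = sc.textFile(paths.mkString(","))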

    Answered on December 31, 2018.

