from pyspark import SparkFiles

# Reading the file with a relative path works fine
rdd = sc.textFile("emp/employees/part-m-00000")
rdd.map(lambda line: line.upper()).collect()
This code executes with no issues, but my file is actually located at
/user/edureka_536711/emp/employees/part-m-00000
I am not sure how the /user/edureka_536711/ prefix is being applied by default, and the code below is failing:
import os

def get_hdfspath(filename):
    my_hdfs = "user/{0}".format(user_id.lower())
    return os.path.join(my_hdfs, filename)

rdd = sc.textFile(sample)
rdd.map(lambda line: line.upper()).collect()
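For reference, here is a minimal sketch of what I think is going on. It is only my guess, not verified: the hard-coded user_id, the call to get_hdfspath, and the leading "/" in the path are my assumptions for illustration, not part of my actual job.

import os

# Assumption: HDFS resolves a relative path against the user's home directory,
# so "emp/employees/part-m-00000" becomes
# "/user/edureka_536711/emp/employees/part-m-00000".

user_id = "edureka_536711"  # hypothetical; in my real code this comes from elsewhere

def get_hdfspath(filename):
    # With a leading "/" the result is an absolute HDFS path; without it,
    # "user/edureka_536711/..." would itself be resolved relative to my home
    # directory, which may be why my version fails.
    my_hdfs = "/user/{0}".format(user_id.lower())
    return os.path.join(my_hdfs, filename)

sample = get_hdfspath("emp/employees/part-m-00000")
rdd = sc.textFile(sample)  # sc is the existing SparkContext
print(rdd.map(lambda line: line.upper()).collect())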
Can you help here?