How to change the Spark session configuration in PySpark

0 votes

I am trying to change the default configuration of the Spark session, but it is not working.

from pyspark.sql import SparkSession

spark_session = SparkSession.builder \
                      .master("ip") \
                      .enableHiveSupport() \
                      .getOrCreate()

spark_session.conf.set("spark.executor.memory", '8g')
spark_session.conf.set('spark.executor.cores', '3')
spark_session.conf.set('spark.cores.max', '3')
spark_session.conf.set("spark.driver.memory",'8g')
sc = spark_session.sparkContext

But if I put the configuration in the spark-submit command, then it works fine for me.

spark-submit --master ip --executor-cores 3 --driver-memory 8G sample.py
May 29, 2018 in Apache Spark by code799
125,592 views

5 answers to this question.

0 votes

Your code is not actually changing the PySpark configuration. Open the pyspark shell and check the settings:

sc.getConf().getAll()

Now execute your code and check the shell's settings again; they will be unchanged.

You first have to create the conf, and then you can create the SparkContext using that configuration object:

import pyspark

config = pyspark.SparkConf().setAll([('spark.executor.memory', '8g'), ('spark.executor.cores', '3'), ('spark.cores.max', '3'), ('spark.driver.memory', '8g')])
sc.stop()  # stop the context started by the shell
sc = pyspark.SparkContext(conf=config)  # recreate it with the new configuration

After that it will work.
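
To confirm the change took effect, you can read a value back from the new context; a quick sanity check using one of the keys set above:

print(sc.getConf().get('spark.executor.memory'))  # should print '8g'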

Thanks.

answered May 29, 2018 by Shubham
• 13,490 points
0 votes

Adding to Shubham's answer: after updating the configuration, you have to stop the existing Spark session and create a new one.

spark.sparkContext.stop()  # stop the old session's underlying context
spark = SparkSession.builder.config(conf=conf).getOrCreate()  # conf is the SparkConf built earlier
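
For completeness, here is a minimal self-contained sketch of this approach, assuming you are in the pyspark shell where a spark session already exists:

from pyspark import SparkConf
from pyspark.sql import SparkSession

# Desired settings, same as in the question
conf = SparkConf().setAll([
    ('spark.executor.memory', '8g'),
    ('spark.executor.cores', '3'),
    ('spark.cores.max', '3'),
    ('spark.driver.memory', '8g'),
])

spark.sparkContext.stop()                                     # stop the old session
spark = SparkSession.builder.config(conf=conf).getOrCreate()  # build a new one

# Verify the setting was applied
print(spark.sparkContext.getConf().get('spark.executor.memory'))  # '8g'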
answered Dec 10, 2018 by Hilight
0 votes

This should work, as long as no Spark context is already running (resource settings such as executor memory cannot be applied to a context that has already started):

spark = SparkSession.builder.config(conf=conf1).getOrCreate()  # conf1 is a SparkConf holding your settings
sc = spark.sparkContext
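
If you only have a few settings, the builder also accepts individual key/value pairs, so a separate SparkConf object is optional; a small sketch using values from the question:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .config("spark.executor.memory", "8g")
         .config("spark.executor.cores", "3")
         .getOrCreate())
sc = spark.sparkContext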
answered Dec 10, 2018 by Shikar
0 votes

You can load properties dynamically. Create the SparkContext with an empty conf in your script, then pass the actual settings at run time through spark-submit:

from pyspark import SparkConf, SparkContext
sc = SparkContext(conf=SparkConf())  # in sample.py: an empty conf, filled in at submit time

spark-submit --master ip --executor-cores 3 --driver-memory 8G sample.py
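
Properties that have no dedicated flag can be passed the same way with --conf; for example, using property names from the question:

spark-submit --master ip \
  --conf spark.executor.memory=8g \
  --conf spark.cores.max=3 \
  sample.py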
answered Dec 10, 2018 by Vini
0 votes

You aren't actually overwriting anything with this code. To see for yourself, try the following.

As soon as you start the pyspark shell, type:

sc.getConf().getAll()

This will show you all of the current config settings. Then run your code and check again: nothing changes.

What you should do instead is create a new configuration and use that to create a SparkContext. Do it like this:

import pyspark

conf = pyspark.SparkConf().setAll([('spark.executor.memory', '8g'), ('spark.executor.cores', '3'), ('spark.cores.max', '3'), ('spark.driver.memory', '8g')])
sc.stop()  # stop the context created by the shell
sc = pyspark.SparkContext(conf=conf)  # recreate it with the new configuration

Then you can check yourself just like above with:

sc.getConf().getAll()

This should reflect the configuration you wanted.
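
One caveat: in the pyspark shell, the spark session still wraps the old, stopped context, so if you also need the SQL API, rebuild the session on top of the new conf (a sketch, reusing the conf object above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.config(conf=conf).getOrCreate()  # new session over the new context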

answered Dec 14, 2020 by Gitika
• 65,770 points
