Hey, A sparse vector is used for storing ...READ MORE
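For instance, a minimal sketch using Spark's ml.linalg API (the sizes and values here are just placeholders):

    import org.apache.spark.ml.linalg.Vectors

    // A sparse vector stores only the non-zero entries: size, indices, values.
    // This 8-element vector is non-zero only at positions 0 and 5.
    val sv = Vectors.sparse(8, Array(0, 5), Array(1.0, 3.0))

    // The equivalent dense vector stores every entry, zeros included.
    val dv = Vectors.dense(1.0, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0, 0.0)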
I used Spark 1.5.2 with Hadoop 2.6 ...READ MORE
Hey, For this purpose, we use the single ...READ MORE
Hey, You need to follow some steps to complete ...READ MORE
Hi, You need to edit one property in ...READ MORE
scala> val rdd1 = sc.parallelize(List(1,2,3,4,5)) - Creating ...READ MORE
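A completed sketch of that spark-shell session (outputs shown as comments):

    // Parallelize a local collection into an RDD, then run actions on it.
    val rdd1 = sc.parallelize(List(1, 2, 3, 4, 5))
    rdd1.collect()   // Array(1, 2, 3, 4, 5)
    rdd1.count()     // 5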
You can do this using globbing. See ...READ MORE
Hey, You can try this: from pyspark import SparkContext SparkContext.stop(sc) sc ...READ MORE
In a Spark application, when you invoke ...READ MORE
Hey @c.kothamasu You should copy your file to ...READ MORE
Start the Spark shell using the below line of ...READ MORE
Hi, Apache Spark is an advanced data processing ...READ MORE
What is the benefit of repartition(1) and ...READ MORE
Hi, To launch a Spark application in cluster mode, ...READ MORE
Hey, Use SparkContext.union(...) instead to union many RDDs at once. You ...READ MORE
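A small sketch of the difference (the RDD contents are placeholders):

    // Chaining rdd1.union(rdd2).union(rdd3)... adds one lineage step per call;
    // SparkContext.union combines many RDDs in a single step.
    val rdds = Seq(sc.parallelize(1 to 3), sc.parallelize(4 to 6), sc.parallelize(7 to 9))
    val combined = sc.union(rdds)
    combined.collect()   // Array(1, 2, ..., 9)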
Hi, You can resolve this error with a ...READ MORE
You need to declare the variable which ...READ MORE
Try including the package while starting the ...READ MORE
The cache() method uses only the default storage level ...READ MORE
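A sketch of the distinction, assuming placeholder input paths:

    import org.apache.spark.storage.StorageLevel

    val rdd = sc.textFile("data.txt")
    rdd.cache()   // shorthand for persist(StorageLevel.MEMORY_ONLY)

    // persist() lets you choose any storage level instead.
    val other = sc.textFile("other.txt")
    other.persist(StorageLevel.MEMORY_AND_DISK)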
Seems like the Spark/Hadoop daemons are not ...READ MORE
Hey, You can try this code to get ...READ MORE
Hey, It already has SparkContext.union and it does know how to ...READ MORE
Hey, It takes a function that operates on two ...READ MORE
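For example, a minimal reduce() sketch:

    // reduce folds the RDD down to a single value with a function of two elements.
    // The function should be associative and commutative, since partitions
    // can be combined in any order.
    val nums = sc.parallelize(List(1, 2, 3, 4, 5))
    val sum = nums.reduce((a, b) => a + b)   // 15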
Hey, ofDim() is a method in Scala that ...READ MORE
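A quick sketch of ofDim() (the dimensions are placeholders):

    // Array.ofDim allocates a multi-dimensional array of the given sizes,
    // filled with the element type's default value (0 for Int).
    val matrix = Array.ofDim[Int](3, 4)   // 3 rows x 4 columns, all zeros
    matrix(1)(2) = 42
    println(matrix(1)(2))                 // 42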
Hi, You can try this to remove brackets from ...READ MORE
SparkContext sets up internal services and establishes ...READ MORE
You can select the column and apply ...READ MORE
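A sketch of that idea, assuming a DataFrame df with a hypothetical "name" column:

    import org.apache.spark.sql.functions._

    // Select the column and apply a built-in function to it,
    // keeping the rest of the row intact.
    val result = df.withColumn("name_upper", upper(col("name")))
    result.select("name", "name_upper").show()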
Source tags are different: { x : [ { ...READ MORE
First, reboot the system. And after reboot, ...READ MORE
This should work: def readExcel(file: String): DataFrame = ...READ MORE
Hey, Here is an example which will return ...READ MORE
Assuming your RDD[row] is called rdd, you ...READ MORE
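A sketch of attaching a schema to that RDD[Row] (the field names and types here are hypothetical):

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}

    val schema = StructType(Seq(
      StructField("name", StringType, nullable = true),
      StructField("age", IntegerType, nullable = true)
    ))
    val df = spark.createDataFrame(rdd, schema)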
Hey, You can use the subtractByKey() function to ...READ MORE
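For instance (the pair contents are placeholders):

    // subtractByKey keeps the pairs from the first RDD whose keys
    // do not appear in the second RDD.
    val a = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))
    val b = sc.parallelize(Seq(("b", 99)))
    a.subtractByKey(b).collect()   // Array((a,1), (c,3))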
You can try the below code: df.registerTempTable("airports") sqlContext.sql(" create ...READ MORE
Converting a text file to ORC: Using Spark, the ...READ MORE
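One possible sketch, assuming the text file is delimited and the paths are placeholders:

    // Read the text file into a DataFrame, then write it back out as ORC.
    val df = spark.read.option("header", "true").csv("/input/data.txt")
    df.write.orc("/output/data_orc")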
Refer to the below code: import org.apache.hadoop.conf.Configuration import org.apache.hadoop.fs.FileSystem import ...READ MORE
After downloading Spark, you need to set ...READ MORE
By default, the timeout is set to ...READ MORE
Hey, you can use "contains" filter to extract ...READ MORE
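A sketch of a contains filter, assuming a DataFrame df with a hypothetical string column "city":

    import org.apache.spark.sql.functions.col

    // Keep only the rows whose value contains the given substring.
    val filtered = df.filter(col("city").contains("York"))
    filtered.show()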
Hey, Jobs - to view all the Spark jobs; Stages - ...READ MORE
Try this code: val rdd = sc.textFile("file.txt", 5) rdd.partitions.size Output ...READ MORE
Hey, There are a few methods provided by the ...READ MORE
Hi, Regarding this error, you just need to change ...READ MORE
I found the following solution to be ...READ MORE
Did you find any documents or example ...READ MORE
Hey, Lineage is an RDD process to reconstruct ...READ MORE
The reason you are able to load ...READ MORE
All prefix operators' symbols are predefined: +, -, ...READ MORE
Well, it depends on the block of ...READ MORE