Hi, @Ritu,
According to the official Spark 1.2 documentation, Spark SQL can cache the data of a SchemaRDD in an in-memory columnar format by registering it as a table and then calling sqlContext.cacheTable("tableName").
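For example, here is a minimal sketch of how that looks in the Spark 1.2 spark-shell. The Person case class, the sample rows, and the "people" table name are just illustrative; sc is the SparkContext that the shell already provides.

// Spark 1.2-style API, run in spark-shell (sc is provided by the shell)
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
// Implicitly convert an RDD of case classes to a SchemaRDD
import sqlContext.createSchemaRDD

case class Person(name: String, age: Int)
val people = sc.parallelize(Seq(Person("Ritu", 25), Person("Sam", 30)))

// Register the SchemaRDD as a temporary table
people.registerTempTable("people")

// Cache the table in Spark SQL's in-memory columnar format
sqlContext.cacheTable("people")

// Subsequent queries are served from the cached columnar data
sqlContext.sql("SELECT name FROM people WHERE age > 20").collect()

// Remove the table from memory when you no longer need it
sqlContext.uncacheTable("people")

When you are done, sqlContext.uncacheTable("tableName") frees the cached columnar data, as shown in the last line above.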