How can we run Spark SQL over Hive tables in our cluster?
Open spark-shell.

scala> import org.apache.spark.sql.hive._
scala> val hc = new HiveContext(sc)
scala> hc.sql("your query").show()
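Note that on Spark 2.x and later, HiveContext is deprecated in favour of SparkSession with Hive support enabled (in spark-shell a Hive-enabled SparkSession is usually already available as spark). A minimal sketch of the equivalent; the app name here is just an illustration:

scala> import org.apache.spark.sql.SparkSession
scala> val spark = SparkSession.builder()
     |   .appName("HiveOverSparkSQL")  // illustrative name, not from the original answer
     |   .enableHiveSupport()          // makes Hive tables visible to Spark SQL
     |   .getOrCreate()
scala> spark.sql("your query").show()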
Open spark-shell.

scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> sqlContext.sql("CREATE TABLE IF NOT EXISTS employee(id INT, name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'")
scala> sqlContext.sql("LOAD DATA LOCAL INPATH 'employee.txt' INTO TABLE employee")
scala> val result = sqlContext.sql("FROM employee SELECT id, name, age")
scala> result.show()
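For the LOAD DATA step to work, employee.txt must match the delimiters declared in the CREATE TABLE statement: one record per line, fields separated by commas. A hypothetical file for this schema (the rows are illustrative, not from the original answer):

1201,satish,25
1202,krishna,28
1203,amith,39

With that file loaded, result.show() would print a table along these lines:

scala> result.show()
+----+-------+---+
|  id|   name|age|
+----+-------+---+
|1201| satish| 25|
|1202|krishna| 28|
|1203|  amith| 39|
+----+-------+---+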