For the above requirement, the memory consumption ...
Hey, this Hadoop fs command appends single sources ...
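For reference, a minimal sketch of the append command this answer describes; all file names below are hypothetical:

    # Append one local file to an existing HDFS file
    hadoop fs -appendToFile local1.txt /user/hadoop/target.txt
    # Several local sources can be appended in a single call
    hadoop fs -appendToFile local1.txt local2.txt /user/hadoop/target.txt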
The command you are typing is incorrect. ...
Hi, you can use the command given below: Syntax: $ ...
The above difference clearly points out that ...
Hey, yes, you can rename a table in ...
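Assuming the question concerns Hive (the snippet is truncated), a minimal sketch of the rename from the command line; both table names are hypothetical:

    # Rename a Hive table in place; data and metadata move with it
    hive -e "ALTER TABLE old_table RENAME TO new_table;"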
With MR2, you should now set conf.setBoolean("mapreduce.map.output.compress", true) and conf.setBoolean("mapreduce.output.fileoutputformat.compress", ...
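Those same MR2 properties can also be passed per job on the command line, provided the driver goes through ToolRunner; the jar name and paths here are hypothetical:

    # Compress both map output and final job output for this run only
    hadoop jar myjob.jar MyDriver \
      -Dmapreduce.map.output.compress=true \
      -Dmapreduce.output.fileoutputformat.compress=true \
      input_dir output_dir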
By default, the Hive database will be ...
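The default warehouse location on HDFS is /user/hive/warehouse, which is easy to confirm:

    # List the databases and tables stored under the default Hive warehouse
    hdfs dfs -ls /user/hive/warehouse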
Hi, the different daemons in YARN are: ResourceManager: runs ...
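Each YARN daemon can be started on its own; a sketch using the Hadoop 2.x script (Hadoop 3 replaces it with "yarn --daemon start ..."):

    # Bring up the two YARN daemons individually
    yarn-daemon.sh start resourcemanager
    yarn-daemon.sh start nodemanager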
Hey, the example uses HBase Shell to keep ...
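A minimal sketch of the kind of HBase Shell session such examples walk through; the table and column family names are hypothetical:

    # Create a table, write one cell, and read it back
    hbase shell <<'EOF'
    create 'test_table', 'cf'
    put 'test_table', 'row1', 'cf:col1', 'value1'
    scan 'test_table'
    EOF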
The reason why you get this error ...
As far as I understand, the Reduce phase starts ...
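When reducers begin is controlled by the slowstart setting; a hedged sketch of overriding it for one job, assuming a ToolRunner driver and a hypothetical jar:

    # Do not launch reducers until 80% of map tasks have finished
    hadoop jar myjob.jar MyDriver \
      -Dmapreduce.job.reduce.slowstart.completedmaps=0.80 \
      input_dir output_dir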
Hey, according to your property setup, there ...
Hi, yes, you can do it by using ...
It looks like your Hadoop daemons ...
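A quick way to see which daemons are actually up, and to bring the stack back if some are missing:

    # jps lists the running Hadoop JVMs (NameNode, DataNode, ResourceManager, ...)
    jps
    # Restart HDFS and YARN if any expected daemon is absent
    start-dfs.sh
    start-yarn.sh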
You can either install Java and Eclipse ...
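Assuming a Debian/Ubuntu machine, a minimal sketch of getting Java in place first (Eclipse can then be installed separately):

    # OpenJDK 8 is a common Java choice for Hadoop setups
    sudo apt-get update
    sudo apt-get install -y openjdk-8-jdk
    java -version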
In the Edureka VM, once you start HBase ...
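If HBase is not already up in the VM, a minimal sketch of starting it and opening the shell:

    # Start the HBase daemons, then drop into the interactive shell
    start-hbase.sh
    hbase shell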
Hey, the DAG in the Hive architecture is nothing but ...
Hey, this error usually occurs when the ...
The JobTracker's function is resource management, tracking ...
Hey, as the error suggests, you have ...
All you have to do to install ...
Hey, here are some of the key responsibilities ...
Hey, basically, when we want to run multiple jobs ...
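For a simple linear dependency, the jobs can also be chained from the shell so each one runs only if its predecessor succeeded; the jar names and paths are hypothetical:

    # job2 consumes job1's output and runs only on job1's success
    hadoop jar job1.jar Driver1 input_dir intermediate_dir && \
    hadoop jar job2.jar Driver2 intermediate_dir output_dir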
Hi, you can create the export folder; you will ...
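Creating such a folder on HDFS is a single command; the path below is hypothetical:

    # -p creates any missing parent directories as well
    hdfs dfs -mkdir -p /user/hadoop/export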
Job job = new Job(conf, "job_name") is just ...
The Hadoop software framework works very well ...
Here's a list of input formats: CombineFileInputFormat, CombineS...
Unfortunately, this can't be achieved with open ...
The best thing about the Million Song Dataset ...
1 - Spark follows a master/slave architecture. So ...
You can use the split function along ...
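Assuming the answer refers to Hive's split() function (the snippet is truncated, so this is a guess), a quick sketch against a literal string:

    # split() breaks a delimited string into an array
    hive -e "SELECT split('a,b,c', ',');"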
Please find the code below for alphabet ...
You can choose your mapper output key ...
The main reason job.waitForCompletion exists is that ...
It is very straightforward; no need ...
You can use the import-all-tables option along with ...
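A hedged sketch of import-all-tables; the connection string, credentials, and excluded table are all hypothetical:

    # Pull every table of a database into HDFS, skipping one
    sqoop import-all-tables \
      --connect jdbc:mysql://localhost/mydb \
      --username myuser --password mypassword \
      --warehouse-dir /user/hadoop/mydb \
      --exclude-tables skipped_table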
To upload a file from your local ...
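A minimal sketch of the upload itself; both paths are hypothetical:

    # Copy a local file into HDFS
    hdfs dfs -put /home/user/data.txt /user/hadoop/data.txt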
It acts as a connector that allows data to flow bidirectionally, so ...
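Assuming this describes Sqoop, the reverse direction looks like this; the table and paths are hypothetical:

    # Push HDFS data back out into an RDBMS table
    sqoop export \
      --connect jdbc:mysql://localhost/mydb \
      --username myuser --password mypassword \
      --table results \
      --export-dir /user/hadoop/results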
Hey, this is because the user directory is not ...
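If the HDFS home directory really is missing, creating it is usually the fix; a hedged sketch (the chown may need HDFS superuser rights):

    # Create the current user's home directory on HDFS and own it
    hdfs dfs -mkdir -p /user/$(whoami)
    hdfs dfs -chown $(whoami) /user/$(whoami)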
After creating the tables a1 and b1 ...
You can do it using the following ...
Spark has much lower per-job and ...
from pyspark.sql.functions import monotonically_increasing_id
df.withColumn("id", monotonically_increasing_id()).show()
Verify the second ...
If you don't want to turn off ...
When you are loading two different files, ...
Hey, the Master and RegionServer both participate in ...
The first column is denoted by $0, ...
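A minimal Pig Latin sketch of positional references, run in local mode; the input file is hypothetical:

    # Project only the first column ($0) of a comma-separated file
    cat > first_col.pig <<'EOF'
    A = LOAD 'data.csv' USING PigStorage(',');
    B = FOREACH A GENERATE $0;
    DUMP B;
    EOF
    pig -x local first_col.pig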
Hey, the error occurred because you might ...
Below are the services running in Hadoop: HDFS, YARN, MapReduce, Oozie, ZooKeeper, Hive, Hue, HBase, Impala, Flume, Sqoop, Spark. Depending ...