How to access Hadoop counter values using the API

–1 vote

In Hadoop, we can increment a counter in a map/reduce task like this:

context.getCounter(MyCountersEnum.SomeCounter).increment(1);

The counter value is then available in the job's log output.
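For context, here is a minimal sketch of where that call typically lives, assuming the new org.apache.hadoop.mapreduce API and the MyCountersEnum name used above:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Counter enum referenced in the snippet above
public enum MyCountersEnum { SomeCounter }

// Example mapper (name is illustrative) that bumps the counter once per input record
class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.getCounter(MyCountersEnum.SomeCounter).increment(1);
    }
}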

Suppose I want to access these counters from code after the job completes. Which Hadoop API should I use to read the counter values?

Dec 31, 2018 in Big Data Hadoop by digger
• 26,740 points
1,154 views

1 answer to this question.

0 votes

You can use the Job object to access the counters once the job has finished. Try this:

Counters counters = job.getCounters();
Counter counter = counters.findCounter(MyCountersEnum.SomeCounter);
System.out.println(counter.getDisplayName() + ": " + counter.getValue());
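Here is a fuller sketch of how that fits into a driver, assuming the new org.apache.hadoop.mapreduce API; the CounterReader class name and the "counter-example" job name are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;

public class CounterReader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "counter-example");
        // ... set mapper, reducer, input and output paths here ...

        // Counter values are only final once the job has completed
        if (job.waitForCompletion(true)) {
            Counters counters = job.getCounters();
            Counter counter = counters.findCounter(MyCountersEnum.SomeCounter);
            System.out.println(counter.getDisplayName() + ": " + counter.getValue());
        }
    }
}

findCounter(Enum) works for enum-based counters like the one in the question; counters identified by group and name strings can be looked up with findCounter(String group, String name).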
answered Dec 31, 2018 by Omkar
• 69,220 points
