This article shows how to define a custom partitioner for MapReduce in Hadoop. The content is concise and clearly organized; hopefully it clears up any confusion. Let's work through "how to define a custom MapReduce partitioner in Hadoop" together.

package hello_hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AutoPartitioner {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        if (args.length != 2) {
            System.err.println("Usage: hadoop jar xxx.jar <input path> <output path>");
            System.exit(1);
        }
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "avg of grades");
        job.setJarByClass(AutoPartitioner.class);
        job.setMapperClass(PartitionInputClass.class);
        job.setReducerClass(PartitionOutputClass.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DoubleWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        // Register the custom partitioner class (declared below).
        job.setPartitionerClass(MyPartitioner.class);
        // Two reduce tasks, one per partition returned by MyPartitioner.
        job.setNumReduceTasks(2);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
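For context: when no partitioner is registered, Hadoop falls back to HashPartitioner, which routes a key to `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`. A minimal plain-Java sketch of that default rule (the class and sample names here are illustrative, and no Hadoop dependency is needed):

```java
// Sketch of the default HashPartitioner rule, using String in place of Text.
public class HashPartitionDemo {
    static int defaultPartition(String key, int numReduceTasks) {
        // Mask the sign bit so the result is non-negative, then take the modulus.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        for (String name : new String[] {"wd", "wzf", "xzh", "zz"}) {
            System.out.println(name + " -> partition " + defaultPartition(name, 2));
        }
    }
}
```

Under this default rule the four names are scattered across the reducers by hash value; the point of the custom partitioner below is to pin all four to partition 0 instead.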
class PartitionInputClass extends Mapper<LongWritable, Text, Text, DoubleWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input line is "name<TAB>grade"; emit (name, grade).
        String line = value.toString();
        if (line.length() > 0) {
            String[] array = line.split("\t");
            if (array.length == 2) {
                String name = array[0];
                int grade = Integer.parseInt(array[1]);
                context.write(new Text(name), new DoubleWritable(grade));
            }
        }
    }
}
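The map step's split-and-validate logic can be exercised without a cluster. This plain-Java sketch (the class name and `parse` helper are hypothetical, not part of the Hadoop job) mirrors the checks in `map()` above:

```java
public class LineParseDemo {
    // Mirrors the mapper: accept only non-empty "name<TAB>grade" lines.
    static String[] parse(String line) {
        if (line.length() == 0) return null;
        String[] array = line.split("\t");
        if (array.length != 2) return null;
        Integer.parseInt(array[1]);  // throws on a non-numeric grade, as the mapper would
        return array;
    }

    public static void main(String[] args) {
        String[] kv = parse("wd\t90");
        System.out.println(kv[0] + " => " + kv[1]);
    }
}
```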
class PartitionOutputClass extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    @Override
    protected void reduce(Text text, Iterable<DoubleWritable> iterable, Context context)
            throws IOException, InterruptedException {
        // Use a double accumulator so the average is not truncated by integer division.
        double sum = 0;
        int cnt = 0;
        for (DoubleWritable iw : iterable) {
            sum += iw.get();
            cnt++;
        }
        context.write(text, new DoubleWritable(sum / cnt));
    }
}
// The custom partitioner class.
// Partitioner<Text, DoubleWritable>: Text and DoubleWritable are the key and value types of the map output.
class MyPartitioner extends Partitioner<Text, DoubleWritable> {
    @Override
    public int getPartition(Text text, DoubleWritable value, int numReduceTasks) {
        String name = text.toString();
        // Route these four names to reducer 0 (output file part-r-00000); all other names go to reducer 1.
        if (name.equals("wd") || name.equals("wzf") || name.equals("xzh") || name.equals("zz")) {
            return 0;
        } else {
            return 1;
        }
    }
}

That covers the full example of "how to define a custom MapReduce partitioner in Hadoop". Thanks for reading!
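As a final sanity check, the routing rule of MyPartitioner can be replayed in plain Java to see which output file each key would land in (the `route` helper, class name, and extra sample names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class PartitionRoutingDemo {
    // Same decision as MyPartitioner.getPartition, minus the Hadoop types.
    static int route(String name) {
        return (name.equals("wd") || name.equals("wzf")
                || name.equals("xzh") || name.equals("zz")) ? 0 : 1;
    }

    public static void main(String[] args) {
        TreeMap<Integer, List<String>> buckets = new TreeMap<>();
        for (String name : new String[] {"wd", "lm", "xzh", "zz", "qq"}) {
            buckets.computeIfAbsent(route(name), k -> new ArrayList<>()).add(name);
        }
        // Partition 0 is written to part-r-00000, partition 1 to part-r-00001.
        System.out.println(buckets);
    }
}
```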