java – Amazon EMR: Running a Custom Jar with Input and Output on S3
I'm trying to run an EMR cluster with a custom jar step. The program takes its input from S3 and writes its output to S3 (or at least that's what I'm trying to accomplish). In the step configuration, I have the following in the arguments field:
v3.MaxTemperatureDriver s3n://hadoopbook/ncdc/all s3n://hadoop-szhu/max-temp

Here hadoopbook/ncdc/all is the path to the bucket containing the input data (as a side note, the example I'm running comes from this book), and hadoop-szhu is my own bucket, where I want to store the output. Following this post, my MapReduce driver looks like this:

package v3;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import v1.MaxTemperatureReducer;
public class MaxTemperatureDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.printf("Usage: %s [generic options] <input> <output>\n",
                    getClass().getSimpleName());
            ToolRunner.printGenericCommandUsage(System.err);
            return -1;
        }

        Job job = new Job(getConf(), "Max temperature");
        job.setJarByClass(getClass());

        // Input and output locations come straight from the step arguments
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // MaxTemperatureMapper is resolved from the same package (v3)
        job.setMapperClass(MaxTemperatureMapper.class);
        job.setCombinerClass(MaxTemperatureReducer.class);
        job.setReducerClass(MaxTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new MaxTemperatureDriver(), args);
        System.exit(exitCode);
    }
}
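
(For reference, a roughly equivalent way to add this step from the AWS CLI is sketched below; the cluster ID j-XXXXXXXXXXXXX and the jar's S3 location are placeholders, while the class name and paths are the ones from the step configuration above:)

# Sketch: submit the custom JAR step via the AWS CLI (placeholder cluster ID and jar path)
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps 'Type=CUSTOM_JAR,Name=Max temperature,ActionOnFailure=CONTINUE,Jar=s3://hadoop-szhu/max-temp.jar,Args=[v3.MaxTemperatureDriver,s3n://hadoopbook/ncdc/all,s3n://hadoop-szhu/max-temp]'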
However, when I try to run it, I get the following error:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: s3n

I also tried copying the data from S3 onto the cluster using the following (run after SSHing into the master node):

hadoop distcp -Dfs.s3n.awsAccessKeyId='...' -Dfs.s3n.awsSecretAccessKey='...' s3n://hadoopbook/ncdc/all input/ncdc/all

but that produced a number of errors as well; an excerpt is below:

2016-09-03 07:07:11,858 FATAL [IPC Server handler 6 on 43495] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1472884232220_0001_m_000000_0 - exited : java.io.IOException: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:224)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    ... 10 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.getFileStatus(EmrFileSystem.java:511)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219)
    ... 9 more
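
(As a quick sanity check, listing the prefix with the AWS CLI shows whether 1901.gz actually exists at that path, which separates a wrong-path problem from a filesystem-scheme problem; this assumes the CLI is configured with credentials that can read the bucket:)

# Sketch: verify the input objects exist under the prefix
aws s3 ls s3://hadoopbook/ncdc/all/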
I'm not sure where the problem lies, but I'm happy to provide more details (feel free to comment below). Thanks!

Solution
s3n:// is the legacy protocol; you should use s3:// instead. On EMR, the s3:// scheme is backed by EMRFS, which is what Amazon recommends for S3 access.
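
Concretely, the step arguments from the question become (same driver class and paths, only the scheme changes):

v3.MaxTemperatureDriver s3://hadoopbook/ncdc/all s3://hadoop-szhu/max-temp

The distcp command changes the same way. A sketch, assuming the cluster's instance role can read the source bucket — on EMR the instance profile normally supplies the S3 credentials, so the -Dfs.s3n.* key options from the question should not be needed:

# Sketch: copy the input onto the cluster using the s3:// (EMRFS) scheme
hadoop distcp s3://hadoopbook/ncdc/all input/ncdc/all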
Reference: http://docs.aws.amazon.com//ElasticMapReduce/latest/ManagementGuide/emr-plan-file-systems.html
