
docker – Sqoop – Import job fails


I am trying to import a table with 32 million records from SQL Server into Hive via Sqoop. The connection to SQL Server succeeds, but the Map/Reduce job does not complete and fails with the following error:

18/07/19 04:00:11 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
18/07/19 04:00:27 DEBUG db.DBConfiguration: Fetching password from job credentials store
18/07/19 04:00:27 INFO db.DBInputFormat: Using read commited transaction isolation
18/07/19 04:00:27 DEBUG db.DataDrivenDBInputFormat: Creating input split with lower bound '1=1' and upper bound '1=1'
18/07/19 04:00:28 INFO mapreduce.JobSubmitter: number of splits:1
18/07/19 04:00:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1531917395459_0002
18/07/19 04:00:30 INFO impl.YarnClientImpl: Submitted application application_1531917395459_0002
18/07/19 04:00:30 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1531917395459_0002/
18/07/19 04:00:30 INFO mapreduce.Job: Running job: job_1531917395459_0002
18/07/19 04:43:02 INFO mapreduce.Job: Job job_1531917395459_0002 running in uber mode : false
18/07/19 04:43:03 INFO mapreduce.Job:  map 0% reduce 0%
18/07/19 04:43:04 INFO mapreduce.Job: Job job_1531917395459_0002 failed with state FAILED due to: Application application_1531917395459_0002 failed 2 times due to ApplicationMaster for attempt appattempt_1531917395459_0002_000002 timed out. Failing the application.
18/07/19 04:43:08 INFO mapreduce.Job: Counters: 0
18/07/19 04:43:08 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/07/19 04:43:09 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 2,576.6368 seconds (0 bytes/sec)
18/07/19 04:43:10 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
18/07/19 04:43:10 INFO mapreduce.ImportJobBase: Retrieved 0 records.
18/07/19 04:43:10 ERROR tool.ImportTool: Error during import: Import job failed!

Below is the configuration of my yarn-site.xml file.

At first, the job got stuck while connecting to the ResourceManager at 0.0.0.0:8032, so I changed the host to 127.0.0.1. After that the execution proceeded, but then the error above occurred. Even when I run the job with only 1,000 rows, I get the same error. In addition, the job sometimes gets killed.
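The host change itself amounts to the ResourceManager address property in yarn-site.xml. The snippet below is only a minimal sketch of that single property, assuming the rest of the file keeps the Cloudera QuickStart defaults:

    <!-- yarn-site.xml: only the property relevant to the change described above -->
    <property>
      <name>yarn.resourcemanager.address</name>
      <!-- previously 0.0.0.0:8032; changed so the client connects via 127.0.0.1 -->
      <value>127.0.0.1:8032</value>
    </property>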

This is my Sqoop command:

sqoop import --connect "jdbc:sqlserver://system-ip;databaseName=TEST" --driver com.microsoft.sqlserver.jdbc.SQLServerDriver --username user1 --password password --hive-import --create-hive-table --hive-table "customer_data_1000" --table "customer_data_1000" --split-by Account_Branch_Converted -m 1 --verbose

And here is my Docker command, just in case:

  docker run --hostname=quickstart.cloudera --privileged=true -t -p 127.0.0.1:8888:8888 -p 127.0.0.1:7180:7180 -p 127.0.0.1:50070:50070 -i 7c41929668d8 /usr/bin/docker-quickstart

Here is the ResourceManager log:

          2018-07-26 07:18:26,439 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: Expired:appattempt_1532588462827_0001_000001 Timed out after 600 secs
        2018-07-26 07:24:03,059 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Updating application attempt appattempt_1532588462827_0001_000001 with final state: FAILED, and exit status: -1000
        2018-07-26 07:35:46,609 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1532588462827_0001_000001 State change from LAUNCHED to FINAL_SAVING
        2018-07-26 07:35:49,502 INFO org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: Expired:quickstart.cloudera:36003 Timed out after 600 secs
        2018-07-26 07:39:44,485 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Deactivating Node quickstart.cloudera:36003 as it is now LOST
        2018-07-26 07:44:39,238 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: quickstart.cloudera:36003 Node Transitioned from RUNNING to LOST
        2018-07-26 07:45:09,895 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1532588462827_0001_000001
        2018-07-26 07:49:43,848 INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceTrackerService: Node not found resyncing quickstart.cloudera:36003
        2018-07-26 07:49:43,916 INFO org.apache.hadoop.yarn.server.resourcemanager.security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1532588462827_0001_000001
        2018-07-26 07:49:45,738 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1532588462827_0001_000001 State change from FINAL_SAVING to FAILED
        2018-07-26 07:49:47,095 WARN org.apache.hadoop.ipc.Server: IPC Server handler 12 on 8032, call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplicationReport from 127.0.0.1:45162 Call#608 Retry#0: output error
        2018-07-26 07:49:47,100 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The number of failed attempts is 1. The max attempts is 2
        2018-07-26 07:49:47,887 INFO org.apache.hadoop.ipc.Server: IPC Server handler 12 on 8032 caught an exception
        java.nio.channels.ClosedChannelException
                at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
                at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
                at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2621)
                at org.apache.hadoop.ipc.Server.access$1900(Server.java:134)
                at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:989)
                at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1054)
                at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2141)
        2018-07-26 07:49:49,127 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1532588462827_0001_000002
        2018-07-26 07:49:49,127 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1532588462827_0001_000002 State change from NEW to SUBMITTED
        2018-07-26 07:49:49,127 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1532588462827_0001_000001
        2018-07-26 07:49:50,458 INFO org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl: container_1532588462827_0001_01_000001 Container Transitioned from RUNNING to KILLED
        2018-07-26 07:49:50,459 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt: Completed container: container_1532588462827_0001_01_000001 in state: KILLED event:KILL
        2018-07-26 07:49:50,460 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root     OPERATION=AM Released Container TARGET=SchedulerApp     RESULT=SUCCESS  APPID=application_1532588462827_0001    CONTAINERID=container_1532588462827_0001_01_000001
        2018-07-26 07:49:50,550 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode: Released container container_1532588462827_0001_01_000001 of capacity 

Where am I going wrong?

Best answer
I can't give you an exact solution, but what I can do is point you at the likely root causes:

>Try running the Sqoop job as a non-root user.
>Check that the JDK is installed correctly on the host and that JAVA_HOME is set correctly.
>Check that you have granted the correct permissions on the database you are using.

Your job is failing for one of the reasons above. You have enough v-cores and memory available, and containers are still being created, so everything on the processing side is fine; there must be a configuration error somewhere. A few quick checks are sketched below.
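Here is a minimal sketch of how you might verify those three points from a shell inside the QuickStart container. sqoop eval is used purely as an illustrative permission check, and the connection string, credentials and table name are simply copied from your import command above, so adjust anything that differs in your setup:

    # Check which user the job is submitted as (the suggestion above is to avoid root)
    whoami

    # Check that the JDK is installed and JAVA_HOME points at it
    echo "$JAVA_HOME"
    "$JAVA_HOME/bin/java" -version

    # Check that the database user can actually read the source table
    # (connection details copied from the sqoop import command above)
    sqoop eval \
      --connect "jdbc:sqlserver://system-ip;databaseName=TEST" \
      --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
      --username user1 --password password \
      --query "SELECT TOP 1 * FROM customer_data_1000"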
