Why does Hadoop on Windows try to connect to 0.0.0.0:10020 (without success)?

Posted by Hanne Mølgaard Plasc

Problem



I installed Hadoop on Windows following this article and can now run the test application hadoop-mapreduce-examples-X.Y.Z.jar. [11]


Unfortunately, when I start a full-scale application it begins trying to reach a strange address, 0.0.0.0:10020. I have changed my DFS config to <value>hdfs://0.0.0.0</value>, but that did not help.
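
For reference, the setting I changed is (I believe) the standard fs.defaultFS property in core-site.xml; a minimal sketch of that change:

<!-- core-site.xml: default filesystem URI (assuming fs.defaultFS is the property in question) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://0.0.0.0</value>
</property>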


The exception follows:


[Thread-14] INFO org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob - Job status available at: http://lagrangian:8088/proxy/application_1525212500911_0002/
[Thread-14] ERROR org.apache.crunch.impl.mr.exec.MRExecutor - Pipeline failed due to exception
java.io.IOException: java.io.IOException: java.net.ConnectException: Call From lagrangian/169.254.105.43 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.crunch.impl.mr.exec.CrunchJobHooks$CompletionHook.handleMultiPaths(CrunchJobHooks.java:92)
        at org.apache.crunch.impl.mr.exec.CrunchJobHooks$CompletionHook.run(CrunchJobHooks.java:79)
        at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.checkRunningState(CrunchControlledJob.java:288)
        at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.checkState(CrunchControlledJob.java:299)
        at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.checkRunningJobs(CrunchJobControl.java:193)
        at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:313)
        at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:131)
        at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:58)
        at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:90)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.net.ConnectException: Call From lagrangian/169.254.105.43 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:344)
        at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:429)
        at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:617)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
        at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
        at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
        at org.apache.hadoop.mapreduce.Job.isSuccessful(Job.java:616)
        at org.apache.crunch.impl.mr.exec.CrunchJobHooks$CompletionHook.handleMultiPaths(CrunchJobHooks.java:84)
        ... 9 more
Caused by: java.net.ConnectException: Call From lagrangian/169.254.105.43 to 0.0.0.0:10020 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1493)
        at org.apache.hadoop.ipc.Client.call(Client.java:1435)
        at org.apache.hadoop.ipc.Client.call(Client.java:1345)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy20.getJobReport(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
        at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:325)
        ... 19 more
Caused by: java.net.ConnectException: Connection refused: no further information
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
        at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1550)
        at org.apache.hadoop.ipc.Client.call(Client.java:1381)
        ... 28 more


I have read that this is probably related to the Job History Server, but I am not sure how to run it on Windows.

Best answer


Probably because the JobHistory server is not started. You can run it with


mapred historyserver


This should be very similar between Windows and Linux. Check the log output and jps to verify that it is running.
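
For example, from a Hadoop command prompt on Windows (a minimal sketch; it assumes the Hadoop bin directory and the JDK's jps tool are on your PATH):

mapred historyserver
REM In a second prompt, check that the daemon is up:
jps
REM A JobHistoryServer process should appear in the jps listing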


Your service addresses should ideally be a hostname (but not localhost), while 0.0.0.0 will make them listen on all addresses.
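
As a sketch, you can pin the history server to a concrete host in mapred-site.xml via the standard mapreduce.jobhistory.address property (which defaults to 0.0.0.0:10020); "lagrangian" below is simply the hostname taken from the log above:

<!-- mapred-site.xml: make clients contact the Job History Server on a real host -->
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>lagrangian:10020</value>
</property>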