1 Phoenix can be reached from the local machine but not from remote clients. The full exception:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/zhangjin/developSoftware/mavenRepository/org/slf4j/slf4j-log4j12/1.7.16/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/zhangjin/mycode/wm/wm-bigdata-etl/zjars/phoenix-4.14.1-cdh5.16.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Fri May 31 11:06:02 CST 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68669: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16201,1559271031589, seqNum=0
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2725)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
    at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2536)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2499)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2499)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:256)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:222)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
    at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
    at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:354)
    at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:118)
    at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:60)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:403)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at org.apache.spark.sql.SQLContext.load(SQLContext.scala:960)
    at com.wm.bigdata.spark.etl.ETLDemo$.main(ETLDemo.scala:40)
    at com.wm.bigdata.spark.etl.ETLDemo.main(ETLDemo.scala)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Fri May 31 11:06:02 CST 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68669: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16201,1559271031589, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:320)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:247)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:62)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
    at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:637)
    at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:424)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1097)
    ... 31 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68669: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16201,1559271031589, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:417)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:723)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:907)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:874)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:397)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:371)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
    ... 4 more

Process finished with exit code 1
2 Reading the exception carefully, the client is trying to connect to localhost (hostname=localhost,16201,...). From the machine itself that naturally works, but from a remote host it obviously cannot. Now that the cause is clear, time to track it down.
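For context, the read that produces the trace above is an ordinary Spark-on-Phoenix load. The snippet below is only a minimal sketch of what ETLDemo.scala roughly does; the table name, ZooKeeper address and SparkSession setup are assumptions, not the original code. The key point it illustrates: the client is only ever given the ZooKeeper quorum, and the RegionServer address it then dials comes back from HBase's own metadata, which in the failing setup still says "localhost".

    // Minimal sketch (assumed names): load a Phoenix table through the phoenix-spark data source.
    import org.apache.spark.sql.SparkSession

    object ETLDemoSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("phoenix-remote-read")
          .master("local[*]")
          .getOrCreate()

        // "MY_TABLE" and the zkUrl are placeholders; only the ZooKeeper address is supplied here.
        // The RegionServer host the client finally connects to is looked up from hbase:meta,
        // which before the fix below still advertises "localhost".
        val df = spark.read
          .format("org.apache.phoenix.spark")
          .option("table", "MY_TABLE")
          .option("zkUrl", "hdp:2181")
          .load()

        df.show()
      }
    }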
3 Some people online say the fix is to set up a DNS server, which felt overly complicated. https://blog.51cto.com/yaoyinjie/650902 follows that approach and also requires configuring DNS; after fiddling with it for half a day without getting anywhere, I gave up on that route.

4 One of those posts mentions connecting with the ZooKeeper client and running get /hbase/master, and sure enough it returns localhost. That is exactly the problem: Phoenix connects through the ZooKeeper address, but the host it actually talks to is whatever is stored under /hbase/master (and the other /hbase znodes), so the task is to change that stored value.

5 The fix: add the name-server settings below to hbase-site.xml (my hostname is hdp, so everything is set to the hostname), delete the /hbase znode in ZooKeeper, and restart HBase so the metadata is re-initialized. Then open the ZooKeeper shell again and check: if get /hbase/master now shows the hostname hdp (or the address it resolves to), the change has taken effect. A small programmatic version of this check is sketched after the configuration below.

<property>
  <name>hbase.master.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a master should use to determine the host name used
    for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.regionserver.dns.nameserver</name>
  <value>hdp</value>
  <description>The host name or IP address of the name server (DNS)
    which a region server should use to determine the host name used by the
    master for communication and display purposes.
  </description>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hdp:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>hdfs://hdp:60000</value>
</property>
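If you want to double-check which hostname the cluster is advertising without leaving the IDE, you can also read the /hbase/master znode directly with the ZooKeeper client API. This is just a convenience sketch, assuming ZooKeeper is reachable at hdp:2181; the znode payload is protobuf-encoded, so most of the output is binary, but the advertised hostname (localhost before the fix, hdp after) shows up as readable text inside it.

    // Sketch: dump the raw /hbase/master znode to see which hostname HBase advertises.
    // Assumes ZooKeeper at hdp:2181; equivalent to running "get /hbase/master" in zkCli.sh.
    import java.nio.charset.StandardCharsets
    import java.util.concurrent.CountDownLatch
    import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}

    object CheckHBaseMaster {
      def main(args: Array[String]): Unit = {
        val connected = new CountDownLatch(1)
        val zk = new ZooKeeper("hdp:2181", 30000, new Watcher {
          override def process(event: WatchedEvent): Unit =
            if (event.getState == Watcher.Event.KeeperState.SyncConnected) connected.countDown()
        })
        connected.await()

        // The payload is "PBUF" plus a protobuf-encoded master address;
        // the host name appears as plain text inside the binary data.
        val data = zk.getData("/hbase/master", false, null)
        println(new String(data, StandardCharsets.UTF_8))

        zk.close()
      }
    }

Before the hbase-site.xml change this prints data containing localhost; after deleting /hbase and restarting HBase it should contain hdp, matching what get /hbase/master shows in the ZooKeeper shell.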