[Flume Tutorial III] Summary of Flume Exceptions
Please credit the source when reposting: https://cpp.la/481.html
I. Flume OutOfMemoryError: java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception details:
```
2016-08-24 17:35:58,927 (Flume Thrift IPC Thread 8) [ERROR - org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:196)] Error while writing to required channel: org.apache.flume.channel.MemoryChannel{name: memoryChannel}
2016-08-24 17:35:59,332 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - kafka.utils.Logging$class.error(Logging.scala:97)] Failed to collate messages by topic, partition due to: GC overhead limit exceeded
2016-08-24 17:35:59,332 (Flume Thrift IPC Thread 8) [ERROR - org.apache.thrift.ProcessFunction.process(ProcessFunction.java:41)] Internal error processing appendBatch
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOf(Arrays.java:3332)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:421)
    at java.lang.StringBuffer.append(StringBuffer.java:272)
    at java.io.StringWriter.write(StringWriter.java:112)
    at java.io.PrintWriter.write(PrintWriter.java:456)
    at java.io.PrintWriter.write(PrintWriter.java:473)
    at java.io.PrintWriter.print(PrintWriter.java:603)
    at java.io.PrintWriter.println(PrintWriter.java:756)
    at java.lang.Throwable$WrappedPrintWriter.println(Throwable.java:764)
    at java.lang.Throwable.printStackTrace(Throwable.java:658)
    at java.lang.Throwable.printStackTrace(Throwable.java:721)
    at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:60)
    at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
    at org.apache.log4j.spi.LoggingEvent.getThrowableStrRep(LoggingEvent.java:413)
    at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:313)
    at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
    at org.apache.log4j.Category.callAppenders(Category.java:206)
    at org.apache.log4j.Category.forcedLog(Category.java:391)
    at org.apache.log4j.Category.log(Category.java:856)
    at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:576)
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:196)
    at org.apache.flume.source.ThriftSource$ThriftSourceHandler.appendBatch(ThriftSource.java:457)
    at org.apache.flume.thrift.ThriftSourceProtocol$Processor$appendBatch.getResult(ThriftSourceProtocol.java:259)
    at org.apache.flume.thrift.ThriftSourceProtocol$Processor$appendBatch.getResult(ThriftSourceProtocol.java:247)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:478)
    at org.apache.thrift.server.Invocation.run(Invocation.java:18)
```
Clearly, the volume of data being collected was too large and the Flume JVM ran out of heap memory.
Solution:
1. Edit bin/flume-ng (e.g. `vim bin/flume-ng`). Find the line JAVA_OPTS="-Xmx20m": by default the maximum heap at startup is only 20 MB, so simply increase that value.
2. Alternatively, find the flume-env.sh.template file in the Flume conf directory:

cp flume-env.sh.template flume-env.sh
vim flume-env.sh

Then uncomment the following line:

# export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"
3. On production machines, size the heap to the actual throughput; a larger value such as -Xmx8192m is often appropriate.
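As a sketch, both fixes amount to raising the JVM heap that bin/flume-ng picks up. A minimal conf/flume-env.sh might look like this (the heap sizes are illustrative assumptions, not recommendations; tune -Xmx to your event volume):

```shell
# conf/flume-env.sh -- sourced by bin/flume-ng at startup.
# Heap sizes below are illustrative; size -Xmx to your throughput.
# -Xms sets the initial heap, -Xmx the maximum heap.
export JAVA_OPTS="-Xms1024m -Xmx8192m -Dcom.sun.management.jmxremote"
```

Settings in flume-env.sh are preferable to editing bin/flume-ng directly, since they survive a Flume upgrade.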
II. org.apache.flume.EventDeliveryException: All sinks failed to process, nothing left to failover to, reported even though sink priorities have in fact been configured
Exception details:
```
org.apache.flume.EventDeliveryException: All sinks failed to process, nothing left to failover to
    at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:191)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
2014-07-08 12:12:26,574 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:206)] Rpc sink k2: Building RpcClient with hostname: 192.168.220.159, port: 44422
2014-07-08 12:12:26,575 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:126)] Attempting to create Avro Rpc client.
2014-07-08 12:12:26,577 (SinkRunner-PollingRunner-FailoverSinkProcessor) [WARN - org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:620)] Using default maxIOWorkers
2014-07-08 12:12:26,595 (SinkRunner-PollingRunner-FailoverSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
```
Solution:
1. The message says "All sinks failed to process". Testing shows that when configuring failover, sinks must not be given the same priority; if several sinks share one priority level, only one of them takes effect.
2. If the error still appears even though distinct priorities are configured, the cause may be that only one agent was restarted. Restart every agent along the entire Flume-to-HDFS chain, both the collecting agents and the downstream (server-side) agent.
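For reference, a failover sink group with distinct priorities might be sketched as follows (the agent, group, and sink names a1, g1, k1, k2 are illustrative placeholders; the sink with the higher priority value is tried first):

```properties
# Illustrative failover configuration; a1/g1/k1/k2 are placeholder names.
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
# Priorities must differ, otherwise only one sink takes effect.
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
# Maximum backoff (ms) for a failed sink before it is retried.
a1.sinkgroups.g1.processor.maxpenalty = 10000
```

After changing sink-group settings, restart all affected agents in the chain so that every hop picks up the new configuration.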