Fixing GC overhead errors caused by Flume receiving too many Kafka messages

When Flume receives too many Kafka messages, it can fail with the following GC error:

Exception in thread "PollableSourceRunner-KafkaSource-s1" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
    at java.lang.StringCoding.decode(StringCoding.java:193)
    at java.lang.String.<init>(String.java:426)
    at java.lang.String.<init>(String.java:491)
    at org.apache.kafka.common.serialization.StringDeserializer.deserialize(StringDeserializer.java:47)
    at org.apache.kafka.common.serialization.StringDeserializer.deserialize(StringDeserializer.java:28)
    at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:65)
    at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:55)
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1038)
    at org.apache.kafka.clients.consumer.internals.Fetcher.access$3300(Fetcher.java:110)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1223)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:1072)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:562)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:523)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1230)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
    at org.apache.flume.source.kafka.KafkaSource.doProcess(KafkaSource.java:216)
    at org.apache.flume.source.AbstractPollableSource.process(AbstractPollableSource.java:60)
    at org.apache.flume.source.PollableSourceRunner$PollingRunner.run(PollableSourceRunner.java:133)
    at java.lang.Thread.run(Thread.java:748)
[2020-04-22 21:52:50,985] WARN Sink failed to consume event. Attempting next sink if available. (org.apache.flume.sink.LoadBalancingSinkProcessor)
org.apache.flume.EventDeliveryException: Failed to send events
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:398)
    at org.apache.flume.sink.LoadBalancingSinkProcessor.process(LoadBalancingSinkProcessor.java:156)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: slave1, port: 52020 }: Failed to send batch
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:310)
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:380)
    ... 3 more
Caused by: org.apache.flume.EventDeliveryException: NettyAvroRpcClient { host: slave1, port: 52020 }: RPC request exception
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:360)
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:298)
    ... 4 more
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:206)
    at org.apache.flume.api.NettyAvroRpcClient.appendBatch(NettyAvroRpcClient.java:352)
    ... 5 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.ArrayList.iterator(ArrayList.java:840)
    at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:103)
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:66)
    at org.apache.avro.generic.GenericDatumWriter.writeArray(GenericDatumWriter.java:131)
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:68)
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:58)
    at org.apache.avro.ipc.specific.SpecificRequestor.writeRequest(SpecificRequestor.java:127)
    at org.apache.avro.ipc.Requestor$Request.getBytes(Requestor.java:473)
    at org.apache.avro.ipc.Requestor.request(Requestor.java:181)
    at org.apache.avro.ipc.Requestor.request(Requestor.java:129)
    at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:84)
    at com.sun.proxy.$Proxy6.appendBatch(Unknown Source)
    at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:343)
    at org.apache.flume.api.NettyAvroRpcClient$2.call(NettyAvroRpcClient.java:339)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
[2020-04-22 21:52:53,746] INFO Rpc sink k1: Building RpcClient with hostname: slave1, port: 52020 (org.apache.flume.sink.AbstractRpcSink)


Or, alternatively:

Exception in thread "PollableSourceRunner-KafkaSource-s1" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "PollableSourceRunner-KafkaSource-s1"
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.OutOfMemoryError: GC overhead limit exceeded


These errors occur because Flume's default maximum JVM heap is only 20 MB (the flume-ng launcher script sets JAVA_OPTS="-Xmx20m" unless it is overridden). Raising this limit resolves the problem.
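A common way to override the default is to set JAVA_OPTS in conf/flume-env.sh, which the flume-ng launcher sources at startup. A minimal sketch follows; the heap sizes here are example values, not a recommendation — size them to your actual message volume and available memory:

```shell
# conf/flume-env.sh
# Override the launcher's default -Xmx20m heap ceiling.
# 1 GB initial / 2 GB max are illustrative example values.
export JAVA_OPTS="-Xms1024m -Xmx2048m"
```

Restart the Flume agent afterwards so the new JVM options take effect; you can confirm the running heap setting by checking the agent's process arguments (e.g. with ps or jps -v).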

Published: 2024-02-04 09:21:33

Permalink: https://www.4u4v.net/it/170703987554329.html
