panguangyao changed the title from "loggie will send plenty of same datas when kafka broker is from down to up" to "loggie will send plenty of same datas when kafka broker is down" on Jun 15, 2022
What version of Loggie?
loggie:main-751105ba
Expected Behavior
Normally, one message should be consumed for every message Loggie sends.
When one broker of the Kafka cluster (a single cluster) goes down and later comes back up, three situations may occur:
1. The messages are lost
2. The missed messages are consumed once
3. The missed messages are consumed a few extra times
Which one happens depends on the acks setting of the Kafka producer config; see the producer sketch after this list.
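For reference, here is a minimal producer-side sketch (this is not Loggie's actual sink code; the kafka-python client, broker addresses, and topic name are placeholders I chose) of how the acks, retry, and batching settings map to the three situations above:

```python
from kafka import KafkaProducer

# acks=0: fire-and-forget, so batches sent while the broker is down are simply
#         dropped -> situation 1 (lost messages).
# acks=1 or acks="all" with retries > 0: batches whose acknowledgement never
#         arrives are re-sent, so the same records can land in the topic more
#         than once -> situations 2 and 3 (duplicates).
producer = KafkaProducer(
    bootstrap_servers=["broker1:9092", "broker2:9092"],
    acks="all",      # wait for all in-sync replicas before a send counts as done
    retries=5,       # re-send batches that were never acknowledged
    linger_ms=100,   # batching: one re-sent batch duplicates many records at once
)

producer.send("loggie-topic", b"some log line")
producer.flush()
```

Whether Loggie's Kafka sink enables an idempotent producer (which would let the broker drop such re-sent duplicates) is exactly what I am unsure about.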
Actual Behavior
A lot of duplicate data is consumed once the broker above comes back up.
The broker was down for about 3~4 hours, from around 10:xx am to 13:50 pm.
The duplicated data ranges from roughly 1700 to 7600 items.
I think the quantity of duplicates is determined by how long the broker stays down and/or by the frequency of the requests sent to Loggie.
In my opinion, the batch sending strategy may be the cause. The data below is from ES (Elasticsearch); a sketch of the kind of aggregation that surfaces these duplicates follows.
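As a hedged sketch (the loggie-logs index name and the body.keyword field are assumptions on my side, not values I verified against this setup), this is the kind of terms aggregation that counts how many times the same record was written to ES:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# elasticsearch-py 8.x style: group documents by the message body and keep only
# buckets where the same record appears more than once.
resp = es.search(
    index="loggie-logs",   # hypothetical index written by the Loggie ES sink
    size=0,
    aggs={
        "dup_messages": {
            "terms": {"field": "body.keyword", "min_doc_count": 2, "size": 50}
        }
    },
)

for bucket in resp["aggregations"]["dup_messages"]["buckets"]:
    print(bucket["key"], "written", bucket["doc_count"], "times")
```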
Steps to Reproduce the Problem
Stop one Kafka broker while you are consuming the data sent from Loggie. Meanwhile, keep sending and keep consuming for a while, maybe half an hour or an hour, then verify the data you consumed. I suggest checking the timestamp field in your data; a rough consumer-side checker is sketched below.
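Here is a rough checker (the kafka-python client, topic name, and the timestamp/body field names are assumptions) that counts how often the same event shows up:

```python
import json
from collections import Counter
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "loggie-topic",
    bootstrap_servers=["broker1:9092"],
    auto_offset_reset="earliest",
    consumer_timeout_ms=30000,   # stop iterating once no messages arrive for 30s
)

seen = Counter()
for msg in consumer:
    event = json.loads(msg.value)
    # assume each Loggie event carries a timestamp and a body; the pair should be unique
    seen[(event.get("timestamp"), event.get("body"))] += 1

dups = {key: count for key, count in seen.items() if count > 1}
print(len(dups), "duplicated events")
if dups:
    print("worst case repeated", max(dups.values()), "times")
```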
Finally, I hope this makes clear what I mean.