How to fix Spring Batch: KafkaItemReader only works if the consumer job is started before the producer job and the correct partitions are specified
I have a simple Spring Batch job that reads from a Kafka topic with 3 partitions. I have the following findings/questions:
- If the consumer job is started after messages have been published to the topic, it waits indefinitely for messages. The consumer only consumes messages if it is started first and messages are produced to the topic afterwards. In the real world I cannot hold back publishing messages until the consumer job has started. How do I fix this?
- My topic has 4 partitions, but the consumer only works if I supply partitions 0, 1 and 2 to the reader. If I also supply partition 3, the consumer waits indefinitely and sometimes throws the following exception:
Exception is......................... : org.apache.kafka.common.errors.TimeoutException: Timeout of 60000ms expired before successfully committing the current consumed offsets
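A TimeoutException like this can also occur when a partition number that does not actually exist on the topic is assigned to the reader. As a sanity check (a sketch, assuming plain kafka-clients on the classpath and the same placeholder broker/consumer settings as in the question), the partition count Kafka actually reports for the topic can be queried before building the reader:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

// Sketch: list the partitions the brokers report for "mytopic".
// Only partition numbers printed here should be passed to partitions(...).
public class PartitionCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "somebroker:someport"); // placeholder from the question
        props.put("group.id", "mygroup");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<PartitionInfo> partitions = consumer.partitionsFor("mytopic");
            partitions.forEach(p -> System.out.println("partition " + p.partition()));
        }
    }
}
```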
Consumer job configuration:
@Configuration
@EnableBatchProcessing
public class BatchConfiguration {

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Bean
    public Job job() {
        return jobBuilderFactory.get("job")
                .incrementer(new RunIdIncrementer())
                .start(testFileWritingStep())
                .build();
    }

    @Bean
    public Step testFileWritingStep() {
        return stepBuilderFactory.get("testFileWritingStep")
                .<String, String>chunk(10)
                .reader(testKafkaItemReader())
                .writer(testFileWriter())
                .build();
    }

    @Bean
    public KafkaItemReader<String, String> testKafkaItemReader() {
        Properties props = new Properties();
        // not providing the actual broker hosts and ports here for security reasons
        // ConsumerConfig (not ProducerConfig) is the appropriate constant for a reader;
        // both resolve to the same "bootstrap.servers" key
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "somebroker:someport,somebroker:someport,somebroker:someport");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mygroup");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "src/main/resources/conf/trust.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "pass");
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "src/main/resources/conf/loc.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "pass");
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "pass");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaItemReaderBuilder<String, String>()
                .partitions(0, 1, 2)
                .consumerProperties(props)
                .name("myreader")
                .saveState(true)
                .topic("mytopic")
                .build();
    }

    @Bean
    public FlatFileItemWriter<String> testFileWriter() {
        FlatFileItemWriter<String> writer = new FlatFileItemWriter<>();
        writer.setResource(new FileSystemResource("I:/CK/data/output.dat"));
        writer.setAppendAllowed(false);
        writer.setShouldDeleteIfExists(true);
        DelimitedLineAggregator<String> lineAggregator = new DelimitedLineAggregator<>();
        lineAggregator.setDelimiter(",");
        writer.setLineAggregator(lineAggregator);
        return writer;
    }
}
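Regarding the first finding: since Spring Batch 4.3, the KafkaItemReader defaults to starting from the offsets already committed for the configured consumer group, so records published before the job's first run can lie behind the group's current position. The reference documentation describes a setPartitionOffsets hook for this; passing an empty map tells the reader to start from the beginning of each assigned partition. A minimal sketch of the reader bean (an assumption that Spring Batch 4.3+ is in use; properties elided, names taken from the question):

```java
@Bean
public KafkaItemReader<String, String> testKafkaItemReader() {
    Properties props = new Properties();
    // ... same consumer properties as in the configuration above ...
    KafkaItemReader<String, String> reader = new KafkaItemReaderBuilder<String, String>()
            .partitions(0, 1, 2)
            .consumerProperties(props)
            .name("myreader")
            .saveState(true)
            .topic("mytopic")
            // end the step when no records arrive within this window,
            // instead of appearing to hang
            .pollTimeout(Duration.ofSeconds(10))
            .build();
    // An empty offsets map makes the reader start from the beginning of each
    // assigned partition rather than from the group's committed offsets.
    reader.setPartitionOffsets(new HashMap<>());
    return reader;
}
```

This needs java.time.Duration and java.util.HashMap on top of the existing imports.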