Spring KafkaListener fails with a Hibernate one-to-one unidirectional mapping

How do I fix a Spring KafkaListener that fails to persist a Hibernate one-to-one unidirectional mapping?

I have modeled a one-to-one relationship in Hibernate using Spring Data JPA. Here are my models:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import lombok.Data;
import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Nationalized;
import org.hibernate.annotations.Parameter;

@Data
@Entity
@Table(name = "parent")
public class Parent
{
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "seq-gen")
    @GenericGenerator(name = "seq-gen",
                      strategy = "org.hibernate.id.enhanced.SequenceStyleGenerator",
                      parameters = { @Parameter(name = "sequence_name", value = "test_seq") })
    @Column(name = "id")
    private Long id;

    @Nationalized
    @Column(name = "Name",length = 50,nullable = false)
    private String name;
}

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.MapsId;
import javax.persistence.OneToOne;
import javax.persistence.Table;
import lombok.Data;
import org.hibernate.annotations.Nationalized;

@Data
@Entity
@Table(name="child")
public class Child
{
    @Id
    private Long id;

    @OneToOne
    @MapsId
    private Parent parent;

    @Nationalized
    @Column(name = "name",nullable = false)
    private String name;
}

The child table has a foreign-key relationship with the parent table, and the parent is mapped onto the child's primary key via @MapsId. Note that this is a unidirectional relationship.

I am using Spring Data JPA's JpaRepository to persist the objects to the database.

public interface ParentRepository extends JpaRepository<Parent,Long>{}

public interface ChildRepository extends JpaRepository<Child,Long>{}

Inside the service class I have a simple method that first persists the Parent object and then persists the Child object.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class ServiceImpl
{
    @Autowired
    private ParentRepository parentRepository;

    @Autowired
    private ChildRepository childRepository;

    public void create()
    {
        Parent parent = new Parent();
        parent.setName("Parent");
        parent = parentRepository.save(parent);

        System.out.println(parent.getId());

        Child child = new Child();
        child.setName("Child");
        child.setParent(parent);
        childRepository.save(child);     // This line fails when using ControllerKafka
    }

}
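One likely factor (my assumption, not stated in the question): each JpaRepository.save() call above runs in its own short transaction, so the Parent returned by the first save() may already be detached by the time the Child is saved. A minimal sketch that keeps both saves in a single persistence context by making the method transactional:

```java
import org.springframework.transaction.annotation.Transactional;

@Transactional   // both saves now share one transaction and persistence context
public void create()
{
    Parent parent = new Parent();
    parent.setName("Parent");
    parent = parentRepository.save(parent);

    Child child = new Child();
    child.setName("Child");
    child.setParent(parent);         // parent is still a managed entity here
    childRepository.save(child);
}
```

With the method transactional, the Parent stays managed until commit, so Hibernate can derive the Child's @MapsId identifier from a managed reference. This would also explain why the REST path works: by default Spring Boot enables Open Session in View (spring.jpa.open-in-view=true), which keeps a persistence context open for the whole web request, something a Kafka listener thread does not get.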

I created a RESTful controller with an autowired ServiceImpl that calls the appropriate method of the service class.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ControllerRestful
{
    @Autowired
    private ServiceImpl service;

    @PostMapping(value = "/test")
    @ResponseStatus(HttpStatus.OK)
    public void create(@RequestBody UserDTO dto)
    {
        System.out.println(dto);
        service.create();
        System.out.println("Done");
    }
}

Note that the service class is independent of the controller. I tested this endpoint locally, and both the parent and child objects were persisted to the database.

Now, when I switch to a Kafka-based controller, it throws an error while persisting.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Controller;

@Controller
public class ControllerKafka
{
    @Autowired
    private ServiceImpl service;

    @KafkaListener(topics = "..",groupId = "..")
    @SendTo
    public void create(UserDTO dto,Acknowledgment acknowledgment)
    {
        service.create();
        System.out.println("Done");
        acknowledgment.acknowledge();
    }
}

Error log:

org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void ControllerKafka.create(UserDTO,org.springframework.kafka.support.Acknowledgment)' threw exception; 
nested exception is org.springframework.dao.InvalidDataAccessApiUsageException: detached entity passed to persist: models.Parent;
nested exception is org.hibernate.PersistentObjectException: detached entity passed to persist: models.Parent; 
nested exception is org.springframework.dao.InvalidDataAccessApiUsageException: detached entity passed to persist: models.Parent; 
nested exception is org.hibernate.PersistentObjectException: detached entity passed to persist: models.Parent
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.decorateException(KafkaMessageListenerContainer.java:1899) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeErrorHandler(KafkaMessageListenerContainer.java:1887) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1792) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1719) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1617) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1348) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1064) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:972) ~[spring-kafka-2.5.2.RELEASE.jar:2.5.2.RELEASE]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
        at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

How can persisting the Child object inside ServiceImpl succeed when called from ControllerRestful but fail when called from ControllerKafka? Likewise, if I remove @MapsId from the Child entity and give it separate primary-key and foreign-key columns, persisting works without error.
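For completeness, here is a sketch of the alternative mapping mentioned above: giving Child its own generated primary key and a plain foreign-key column instead of @MapsId (the column name parent_id and the IDENTITY strategy are my assumptions). With this mapping Hibernate only needs the parent's identifier value to write the foreign key, so a detached Parent reference no longer triggers the "detached entity passed to persist" error:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.OneToOne;
import javax.persistence.Table;
import lombok.Data;
import org.hibernate.annotations.Nationalized;

@Data
@Entity
@Table(name = "child")
public class Child
{
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;                 // own primary key instead of one derived via @MapsId

    @OneToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "parent_id", nullable = false)  // hypothetical FK column name
    private Parent parent;

    @Nationalized
    @Column(name = "name", nullable = false)
    private String name;
}
```

This trades the shared-primary-key design for an extra column, but it decouples the Child insert from the managed state of the Parent reference.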

