How does the Nouveau driver allocate memory on the GPU side?


I have been digging into the Nouveau GPU driver (the open-source driver for Nvidia GPUs) because I want to understand what actually happens inside a driver. Specifically, I have been trying to figure out how the Nouveau code running on the CPU side manages to allocate memory on the GPU side.
In other words, I want to learn how something like cudaMalloc() allocates memory on the GPU. (I know cudaMalloc() is part of the CUDA API, but I am not sure what the Nouveau equivalent of cudaMalloc() is.)
Here is what I have found so far:

  1. Nouveau's TTM (Translation Table Manager) initializes a bo (buffer object) through int ttm_bo_init(...), whose definition can be found here and is given by:
int ttm_bo_init(struct ttm_bo_device *bdev,
        struct ttm_buffer_object *bo,
        unsigned long size,
        enum ttm_bo_type type,
        struct ttm_placement *placement,
        uint32_t page_alignment,
        unsigned long buffer_start,
        bool interruptible,
        struct file *persistent_swap_storage,
        size_t acc_size,
        void (*destroy) (struct ttm_buffer_object *))
{
    int ret = 0;
    unsigned long num_pages;
    struct ttm_mem_global *mem_glob = bdev->glob->mem_glob;

    ret = ttm_mem_global_alloc(mem_glob, acc_size, false, false);
    if (ret) {
        printk(KERN_ERR TTM_PFX "Out of kernel memory.\n");
        if (destroy)
            (*destroy)(bo);
        else
            kfree(bo);
        return -ENOMEM;
    }

    size += buffer_start & ~PAGE_MASK;
    num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
    if (num_pages == 0) {
        printk(KERN_ERR TTM_PFX "Illegal buffer object size.\n");
        if (destroy)
            (*destroy)(bo);
        else
            kfree(bo);
        return -EINVAL;
    }
    bo->destroy = destroy;

    kref_init(&bo->kref);
    kref_init(&bo->list_kref);
    atomic_set(&bo->cpu_writers, 0);
    atomic_set(&bo->reserved, 1);
    init_waitqueue_head(&bo->event_queue);
    INIT_LIST_HEAD(&bo->lru);
    INIT_LIST_HEAD(&bo->ddestroy);
    INIT_LIST_HEAD(&bo->swap);
    INIT_LIST_HEAD(&bo->io_reserve_lru);
    bo->bdev = bdev;
    bo->glob = bdev->glob;
    bo->type = type;
    bo->num_pages = num_pages;
    bo->mem.size = num_pages << PAGE_SHIFT;
    bo->mem.mem_type = TTM_PL_SYSTEM;
    bo->mem.num_pages = bo->num_pages;
    bo->mem.mm_node = NULL;
    bo->mem.page_alignment = page_alignment;
    bo->mem.bus.io_reserved_vm = false;
    bo->mem.bus.io_reserved_count = 0;
    bo->buffer_start = buffer_start & PAGE_MASK;
    bo->priv_flags = 0;
    bo->mem.placement = (TTM_PL_FLAG_SYSTEM | TTM_PL_FLAG_CACHED);
    bo->seq_valid = false;
    bo->persistent_swap_storage = persistent_swap_storage;
    bo->acc_size = acc_size;
    atomic_inc(&bo->glob->bo_count);

    ret = ttm_bo_check_placement(bo, placement);
    if (unlikely(ret != 0))
        goto out_err;

    /*
     * For ttm_bo_type_device buffers, allocate
     * address space from the device.
     */
    if (bo->type == ttm_bo_type_device) {
        ret = ttm_bo_setup_vm(bo);
        if (ret)
            goto out_err;
    }

    ret = ttm_bo_validate(bo, placement, interruptible, false);
    if (ret)
        goto out_err;

    ttm_bo_unreserve(bo);
    return 0;

out_err:
    ttm_bo_unreserve(bo);
    ttm_bo_unref(&bo);

    return ret;
}

I think that for a userspace application, the buffer gets assigned to the device (GPU) side by ttm_bo_setup_vm() in this part of the code:

    /*
     * For ttm_bo_type_device buffers, allocate
     * address space from the device.
     */
    if (bo->type == ttm_bo_type_device) {
        ret = ttm_bo_setup_vm(bo);
        if (ret)
            goto out_err;
    }

ttm_bo_setup_vm() is defined as:

/**
 * ttm_bo_setup_vm:
 *
 * @bo: the buffer to allocate address space for
 *
 * Allocate address space in the drm device so that applications
 * can mmap the buffer and access the contents. This only
 * applies to ttm_bo_type_device objects as others are not
 * placed in the drm device address space.
 */

static int ttm_bo_setup_vm(struct ttm_buffer_object *bo)
{
    struct ttm_bo_device *bdev = bo->bdev;
    int ret;

retry_pre_get:
    ret = drm_mm_pre_get(&bdev->addr_space_mm);
    if (unlikely(ret != 0))
        return ret;

    write_lock(&bdev->vm_lock);
    bo->vm_node = drm_mm_search_free(&bdev->addr_space_mm, bo->mem.num_pages, 0);

    if (unlikely(bo->vm_node == NULL)) {
        ret = -ENOMEM;
        goto out_unlock;
    }

    bo->vm_node = drm_mm_get_block_atomic(bo->vm_node, 0);

    if (unlikely(bo->vm_node == NULL)) {
        write_unlock(&bdev->vm_lock);
        goto retry_pre_get;
    }

    ttm_bo_vm_insert_rb(bo);
    write_unlock(&bdev->vm_lock);
    bo->addr_space_offset = ((uint64_t) bo->vm_node->start) << PAGE_SHIFT;

    return 0;
out_unlock:
    write_unlock(&bdev->vm_lock);
    return ret;
}

But I am having a hard time figuring out how the code between the lock and unlock calls allocates memory on the GPU side.
Also, I stumbled upon this question, asked almost 10 years ago, and would like to know whether the answer still holds: is it possible to allocate and free memory on the GPU side with regular malloc() and free()?

I know this question is somewhat vague, but I am not sure what other information I could provide to make it clearer. Any pointers or ideas would be greatly appreciated.
Thanks in advance!

