Virtio-GPU Venus - The Mesa 3D Graphics Library

Virtio-GPU Venus

Venus is a Virtio-GPU protocol for Vulkan command serialization. The protocol definition and codegen are hosted at venus-protocol. The renderer is hosted at virglrenderer.

Requirements

The Venus renderer requires

  • Linux platform
    • Vulkan 1.1

    • VK_KHR_external_memory_fd

  • Android platform
    • Vulkan 1.1

    • VK_EXT_external_memory_dma_buf

    • VK_EXT_image_drm_format_modifier

    • VK_EXT_queue_family_foreign

from the host driver. However, the renderer violates the Vulkan specification and relies on implementation-defined behaviors to support vkMapMemory (see below), so it is not expected to work on every driver that meets these requirements. It has only been tested with:

  • ANV 21.1 or later

  • RADV 21.1 or later
    • Note: you need a 6.13+ kernel, which already has “KVM: Stop grabbing references to PFNMAP’d pages”.

    • Note: for a dGPU paired with an Intel CPU, you need a 6.11+ kernel patched with “KVM: VMX: Always honor guest PAT on CPUs that support self-snoop”, or a 6.16+ kernel whose VMM opts out of KVM_X86_QUIRK_IGNORE_GUEST_PAT (the QEMU request is here).

  • Turnip 22.0 or later

  • PanVK 25.1 or later

  • Lavapipe 22.1 or later

  • Mali (Proprietary) r32p0 or later

  • NVIDIA (Proprietary) 570.86 or later
    • Note: if paired with an Intel CPU, you need a 6.11+ kernel patched with “KVM: VMX: Always honor guest PAT on CPUs that support self-snoop”, or a 6.16+ kernel whose VMM opts out of KVM_X86_QUIRK_IGNORE_GUEST_PAT (the QEMU request is here).

The Venus driver requires support for

  • VIRTGPU_PARAM_3D_FEATURES

  • VIRTGPU_PARAM_CAPSET_QUERY_FIX

  • VIRTGPU_PARAM_RESOURCE_BLOB

  • VIRTGPU_PARAM_HOST_VISIBLE

  • VIRTGPU_PARAM_CONTEXT_INIT

from the virtio-gpu kernel driver, unless vtest is used. That usually means the guest kernel should be at least 5.16 or have the parameters backported, paired with a hypervisor such as crosvm or QEMU.
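The guest-side version floor can be sanity-checked with a small script. This is only a sketch: the kernel_at_least helper is illustrative (it is not part of Venus or virglrenderer), and a plain uname -r comparison cannot detect parameters that were backported to an older kernel.

```shell
# kernel_at_least MAJOR MINOR [RELEASE]: succeed when the given (or running)
# kernel release string is at least MAJOR.MINOR. Illustrative helper only.
kernel_at_least() {
    req_major=$1
    req_minor=$2
    rel=${3:-$(uname -r)}      # e.g. "6.1.0-18-amd64"
    major=${rel%%.*}           # text before the first dot
    rest=${rel#*.}
    minor=${rest%%[!0-9]*}     # leading digits of the second field
    [ "$major" -gt "$req_major" ] ||
        { [ "$major" -eq "$req_major" ] && [ "$minor" -ge "$req_minor" ]; }
}

# Venus needs the VIRTGPU_PARAMs that landed in kernel 5.16 (unless backported).
if kernel_at_least 5 16; then
    echo "guest kernel should expose the required virtio-gpu parameters"
else
    echo "guest kernel predates 5.16: upgrade or check for backports"
fi
```

The same helper can be reused against the host-kernel notes above (6.11, 6.13, 6.16), with the same caveat about patched or backported kernels.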

vtest

The simplest way to test Venus is to use virglrenderer’s vtest server. To build virglrenderer with Venus support and to start the vtest server,

$ git clone https://gitlab.freedesktop.org/virgl/virglrenderer.git
$ cd virglrenderer
$ meson out -Dvenus=true
$ meson compile -C out
$ meson devenv -C out
$ ./vtest/virgl_test_server --venus
$ exit

In another shell,

$ export VK_DRIVER_FILES=<path-to-virtio_icd.x86_64.json>
$ export VN_DEBUG=vtest
$ vulkaninfo
$ vkcube

If the system's host driver is not new enough, it is a good idea to build the host driver as well when building the Venus driver. Just remember to set VK_DRIVER_FILES when starting the vtest server so that it finds the locally built host driver.

Virtio-GPU

The driver requires VIRTGPU_PARAM_CONTEXT_INIT from the virtio-gpu kernel driver, which was upstreamed in kernel 5.16.

crosvm is written in Rust. To build crosvm, make sure Rust has been installed and

$ git clone --recurse-submodules \
      https://chromium.googlesource.com/chromiumos/platform/crosvm
$ cd crosvm
$ RUSTFLAGS="-L<path-to-virglrenderer>/out/src" cargo build \
      --features "x wl-dmabuf virgl_renderer virgl_renderer_next default-no-sandbox"

Note that crosvm must be built with default-no-sandbox or started with --disable-sandbox in this setup.

This is how one might want to start crosvm

$ sudo LD_LIBRARY_PATH=<...> VK_DRIVER_FILES=<...> ./target/debug/crosvm run \
      --gpu vulkan=true \
      --gpu-render-server path=<path-to-virglrenderer>/out/server/virgl_render_server \
      --display-window-keyboard \
      --display-window-mouse \
      --net "host-ip=192.168.0.1,netmask=255.255.255.0,mac=12:34:56:78:9a:bc" \
      --rwdisk disk.img \
      -p root=/dev/vda1 \
      <path-to-bzImage>

assuming a working system is installed to partition 1 of disk.img. sudo or CAP_NET_ADMIN is needed to set up the TAP network device.

Optional Requirements

When virglrenderer is built with -Dminigbm_allocation=true, the Venus renderer might need to import GBM BOs. The imports will fail unless the host driver supports the formats, especially multi-planar ones, and the DRM format modifiers of the GBM BOs.
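As a reference for that configuration, the option is enabled when configuring virglrenderer, alongside the -Dvenus option from the vtest section; this is a configuration sketch, not a complete build recipe:

```shell
# Configure virglrenderer with Venus plus minigbm allocation; with
# -Dminigbm_allocation=true the renderer may import GBM BOs, so the host
# driver must support their formats and DRM format modifiers.
$ meson out -Dvenus=true -Dminigbm_allocation=true
$ meson compile -C out
```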

In the future, if virglrenderer’s virgl_renderer_export_fence is supported, the Venus renderer will require VK_KHR_external_fence_fd with VK_EXTERNAL_FENCE_HANDLE_TYPE_SYNC_FD_BIT from the host driver.

VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT

The Venus renderer makes assumptions about VkDeviceMemory that has VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT. The assumptions are illegal and rely on the current behaviors of the host drivers. It should be possible to remove some of the assumptions and incrementally improve compatibility with more host drivers by imposing platform-specific requirements. But the long-term plan is to create a new Vulkan extension for the host drivers to address this specific use case.

The Venus renderer assumes a device memory that has VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT can be exported as a mmapable dma-buf (in the future, the plan is to export the device memory as an opaque fd). It chains VkExportMemoryAllocateInfo to VkMemoryAllocateInfo without checking if the host driver can export the device memory.

The dma-buf is mapped (in the future, the plan is to import the opaque fd and call vkMapMemory) but the mapping is not accessed. Instead, the mapping is passed to KVM_SET_USER_MEMORY_REGION. The hypervisor, host KVM, and the guest kernel work together to set up a write-back or write-combined guest mapping (see virtio_gpu_vram_mmap of the virtio-gpu kernel driver). CPU accesses to the device memory are via the guest mapping, and are assumed to be coherent when the device memory also has VK_MEMORY_PROPERTY_HOST_COHERENT_BIT.

While the Venus renderer can force a VkDeviceMemory to be external, it does not force a VkImage or a VkBuffer to be external. As a result, it can bind an external device memory to a non-external resource.
