Merge drm/drm-next into drm-misc-next
Noralf needs some patches in 5.12-rc3, and we've been delaying the 5.12
merge due to the swap issue, so it looks like a good time.
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
diff --git a/Documentation/devicetree/bindings/display/panel/panel-simple.yaml b/Documentation/devicetree/bindings/display/panel/panel-simple.yaml
index 62b0d54..b3797ba 100644
--- a/Documentation/devicetree/bindings/display/panel/panel-simple.yaml
+++ b/Documentation/devicetree/bindings/display/panel/panel-simple.yaml
@@ -161,6 +161,8 @@
# Innolux Corporation 12.1" G121X1-L03 XGA (1024x768) TFT LCD panel
- innolux,g121x1-l03
# Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
+ - innolux,n116bca-ea1
+ # Innolux Corporation 11.6" WXGA (1366x768) TFT LCD panel
- innolux,n116bge
# InnoLux 13.3" FHD (1920x1080) eDP TFT LCD panel
- innolux,n125hce-gn1
diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index a2133d6..7f37ec3 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,79 @@
userspace is allowed to use userspace fencing or long running compute
workloads. This also means no implicit fencing for shared buffers in these
cases.
+
+Recoverable Hardware Page Faults Implications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modern hardware supports recoverable page faults, which has far-reaching
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator, and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
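+As a simplified illustration - with a hypothetical fix_up_gpu_page_fault()
+helper, not an existing kernel code path - the forbidden pattern is a fence
+whose completion gates on an allocation::
+
+    /* Hypothetical sketch: fence completion gating on an allocation. */
+    page = alloc_page(GFP_KERNEL);  /* can recurse into reclaim, and     */
+    fix_up_gpu_page_fault(page);    /* reclaim may wait on DMA fences... */
+    dma_fence_signal(job->fence);   /* ...including this one: deadlock   */
+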
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. Specifically, this means implicit synchronization will not be possible.
+The exception is when page faults are only used as migration hints and never
+to fill a memory request on demand. For now this means recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore, GPUs usually have shared resources between the 3D rendering and
+compute side, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending, they could deadlock (see the sketch after this list):
+
+- The 3D workload might need to wait for the compute job to finish and release
+ hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+ allocation is waiting for the DMA fence of the 3D workload to complete.
+
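+The cycle, spelled out (a simplified sketch, not driver code)::
+
+    3D job       -> waits for compute units held by the compute job
+    compute job  -> waits for its page fault to be repaired
+    fault repair -> waits for a memory allocation
+    allocation   -> may wait (via reclaim) on the 3D job's DMA fence: deadlock
+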
+There are a few options to prevent this problem; drivers need to ensure at
+least one of them:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+ and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+ independent hardware resources to guarantee forward progress. This could be
+ achieved e.g. through dedicated engines and minimal compute unit
+ reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only reserving the
+ hardware resources for DMA fence workloads when they are in-flight. This must
+ cover the time from when the DMA fence is visible to other threads up to
+ the moment when the fence is completed through dma_fence_signal().
+
+- As a last resort, if the hardware provides no useful reservation mechanics,
+ all workloads must be flushed from the GPU when switching between jobs
+ requiring DMA fences or jobs requiring page fault handling: This means all DMA
+ fences must complete before a compute job with page fault handling can be
+ inserted into the scheduler queue. And vice versa, before a DMA fence can be
+ made visible anywhere in the system, all compute workloads must be preempted
+ to guarantee all pending GPU page faults are flushed.
+
+- Only a fairly theoretical option would be to untangle these dependencies when
+ allocating memory to repair hardware page faults, either through separate
+ memory blocks or runtime tracking of the full dependency graph of all DMA
+ fences. This would have a very wide impact on the kernel, since resolving
+ the page fault on the CPU side can itself involve a page fault. It is much
+ more feasible and
+ robust to limit the impact of handling hardware page faults to the specific
+ driver.
+
+Note that workloads that run on independent hardware like copy engines or other
+GPUs do not have any impact. This allows us to keep using DMA fences internally
+in the kernel even for resolving hardware page faults, e.g. by using copy
+engines to clear or copy memory needed to resolve the page fault.
+
+In some ways this page fault problem is a special case of the `Infinite DMA
+Fences` discussions: Infinite fences from compute workloads are allowed to
+depend on DMA fences, but not the other way around. And not even the page fault
+problem is new, because some other CPU thread in userspace might
+hit a page fault which holds up a userspace fence - supporting page faults on
+GPUs doesn't add anything fundamentally new.
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 1b4b64b..7ff9fac 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -677,7 +677,7 @@
Convert fbdev drivers to DRM
----------------------------
-There are plenty of fbdev drivers for older hardware. Some hwardware has
+There are plenty of fbdev drivers for older hardware. Some hardware has
become obsolete, but some still provides good(-enough) framebuffers. The
drivers that are still useful should be converted to DRM and afterwards
removed from fbdev.
diff --git a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
index 54ee2cb..d60e1ee 100644
--- a/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
+++ b/drivers/gpu/drm/ingenic/ingenic-drm-drv.c
@@ -561,7 +561,7 @@ static void ingenic_drm_plane_atomic_update(struct drm_plane *plane,
height = newstate->src_h >> 16;
cpp = newstate->fb->format->cpp[0];
- if (priv->soc_info->has_osd && plane->type == DRM_PLANE_TYPE_OVERLAY)
+ if (!priv->soc_info->has_osd || plane->type == DRM_PLANE_TYPE_OVERLAY)
hwdesc = &priv->dma_hwdescs->hwdesc_f0;
else
hwdesc = &priv->dma_hwdescs->hwdesc_f1;
@@ -833,6 +833,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
const struct jz_soc_info *soc_info;
struct ingenic_drm *priv;
struct clk *parent_clk;
+ struct drm_plane *primary;
struct drm_bridge *bridge;
struct drm_panel *panel;
struct drm_encoder *encoder;
@@ -947,9 +948,11 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
if (soc_info->has_osd)
priv->ipu_plane = drm_plane_from_index(drm, 0);
- drm_plane_helper_add(&priv->f1, &ingenic_drm_plane_helper_funcs);
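+ /* Without OSD, f0 is the only plane and must be the primary */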
+ primary = priv->soc_info->has_osd ? &priv->f1 : &priv->f0;
- ret = drm_universal_plane_init(drm, &priv->f1, 1,
+ drm_plane_helper_add(primary, &ingenic_drm_plane_helper_funcs);
+
+ ret = drm_universal_plane_init(drm, primary, 1,
&ingenic_drm_primary_plane_funcs,
priv->soc_info->formats_f1,
priv->soc_info->num_formats_f1,
@@ -961,7 +964,7 @@ static int ingenic_drm_bind(struct device *dev, bool has_components)
drm_crtc_helper_add(&priv->crtc, &ingenic_drm_crtc_helper_funcs);
- ret = drm_crtc_init_with_planes(drm, &priv->crtc, &priv->f1,
+ ret = drm_crtc_init_with_planes(drm, &priv->crtc, primary,
NULL, &ingenic_drm_crtc_funcs, NULL);
if (ret) {
dev_err(dev, "Failed to init CRTC: %i\n", ret);
diff --git a/drivers/gpu/drm/mcde/mcde_dsi.c b/drivers/gpu/drm/mcde/mcde_dsi.c
index 2314c81..b3fd350 100644
--- a/drivers/gpu/drm/mcde/mcde_dsi.c
+++ b/drivers/gpu/drm/mcde/mcde_dsi.c
@@ -760,7 +760,7 @@ static void mcde_dsi_start(struct mcde_dsi *d)
DSI_MCTL_MAIN_DATA_CTL_BTA_EN |
DSI_MCTL_MAIN_DATA_CTL_READ_EN |
DSI_MCTL_MAIN_DATA_CTL_REG_TE_EN;
- if (d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET)
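+ /* MIPI_DSI_MODE_EOT_PACKET disables EoT, so generate EoT when it's unset */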
+ if (!(d->mdsi->mode_flags & MIPI_DSI_MODE_EOT_PACKET))
val |= DSI_MCTL_MAIN_DATA_CTL_HOST_EOT_GEN;
writel(val, d->regs + DSI_MCTL_MAIN_DATA_CTL);
diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
index b9a0e56..ef70140 100644
--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
@@ -898,8 +898,7 @@ static int nt35510_probe(struct mipi_dsi_device *dsi)
*/
dsi->hs_rate = 349440000;
dsi->lp_rate = 9600000;
- dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
- MIPI_DSI_MODE_EOT_PACKET;
+ dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
/*
* Every new incarnation of this display must have a unique
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
index 4aac0d1..70560ca 100644
--- a/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
+++ b/drivers/gpu/drm/panel/panel-samsung-s6d16d0.c
@@ -184,9 +184,7 @@ static int s6d16d0_probe(struct mipi_dsi_device *dsi)
* As we only send commands we do not need to be continuously
* clocked.
*/
- dsi->mode_flags =
- MIPI_DSI_CLOCK_NON_CONTINUOUS |
- MIPI_DSI_MODE_EOT_PACKET;
+ dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
s6->supply = devm_regulator_get(dev, "vdd1");
if (IS_ERR(s6->supply))
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
index eec74c1..9c3563c 100644
--- a/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e63m0-dsi.c
@@ -97,7 +97,6 @@ static int s6e63m0_dsi_probe(struct mipi_dsi_device *dsi)
dsi->hs_rate = 349440000;
dsi->lp_rate = 9600000;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO |
- MIPI_DSI_MODE_EOT_PACKET |
MIPI_DSI_MODE_VIDEO_BURST;
ret = s6e63m0_probe(dev, s6e63m0_dsi_dcs_read, s6e63m0_dsi_dcs_write,
diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
index 9858079..be312b5 100644
--- a/drivers/gpu/drm/panel/panel-simple.c
+++ b/drivers/gpu/drm/panel/panel-simple.c
@@ -376,12 +376,13 @@ static int panel_simple_get_hpd_gpio(struct device *dev,
return 0;
}
-static int panel_simple_prepare(struct drm_panel *panel)
+static int panel_simple_prepare_once(struct drm_panel *panel)
{
struct panel_simple *p = to_panel_simple(panel);
unsigned int delay;
int err;
int hpd_asserted;
+ unsigned long hpd_wait_us;
if (p->prepared_time != 0)
return 0;
@@ -406,25 +407,63 @@ static int panel_simple_prepare(struct drm_panel *panel)
if (IS_ERR(p->hpd_gpio)) {
err = panel_simple_get_hpd_gpio(panel->dev, p, false);
if (err)
- return err;
+ goto error;
}
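+ /* Wait for HPD up to the panel's hpd_absent_delay (ms), else 2 seconds */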
+ if (p->desc->delay.hpd_absent_delay)
+ hpd_wait_us = p->desc->delay.hpd_absent_delay * 1000UL;
+ else
+ hpd_wait_us = 2000000;
+
err = readx_poll_timeout(gpiod_get_value_cansleep, p->hpd_gpio,
hpd_asserted, hpd_asserted,
- 1000, 2000000);
+ 1000, hpd_wait_us);
if (hpd_asserted < 0)
err = hpd_asserted;
if (err) {
- dev_err(panel->dev,
- "error waiting for hpd GPIO: %d\n", err);
- return err;
+ if (err != -ETIMEDOUT)
+ dev_err(panel->dev,
+ "error waiting for hpd GPIO: %d\n", err);
+ goto error;
}
}
p->prepared_time = ktime_get();
return 0;
+
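+/*
+ * On failure, power the panel back down so that a retry starts from a
+ * clean power cycle.
+ */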
+error:
+ gpiod_set_value_cansleep(p->enable_gpio, 0);
+ regulator_disable(p->supply);
+ p->unprepared_time = ktime_get();
+
+ return err;
+}
+
+/*
+ * Some panels simply don't always come up and need to be power cycled to
+ * work properly. We'll allow for a handful of retries.
+ */
+#define MAX_PANEL_PREPARE_TRIES 5
+
+static int panel_simple_prepare(struct drm_panel *panel)
+{
+ int ret;
+ int try;
+
+ for (try = 0; try < MAX_PANEL_PREPARE_TRIES; try++) {
+ ret = panel_simple_prepare_once(panel);
+ if (ret != -ETIMEDOUT)
+ break;
+ }
+
+ if (ret == -ETIMEDOUT)
+ dev_err(panel->dev, "Prepare timeout after %d tries\n", try);
+ else if (try)
+ dev_warn(panel->dev, "Prepare needed %d retries\n", try);
+
+ return ret;
}
static int panel_simple_enable(struct drm_panel *panel)
@@ -1445,6 +1484,7 @@ static const struct panel_desc boe_nv110wtm_n61 = {
.delay = {
.hpd_absent_delay = 200,
.prepare_to_enable = 80,
+ .enable = 50,
.unprepare = 500,
},
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
@@ -2368,6 +2408,36 @@ static const struct panel_desc innolux_g121x1_l03 = {
},
};
+static const struct drm_display_mode innolux_n116bca_ea1_mode = {
+ .clock = 76420,
+ .hdisplay = 1366,
+ .hsync_start = 1366 + 136,
+ .hsync_end = 1366 + 136 + 30,
+ .htotal = 1366 + 136 + 30 + 60,
+ .vdisplay = 768,
+ .vsync_start = 768 + 8,
+ .vsync_end = 768 + 8 + 12,
+ .vtotal = 768 + 8 + 12 + 12,
+ .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC,
+};
+
+static const struct panel_desc innolux_n116bca_ea1 = {
+ .modes = &innolux_n116bca_ea1_mode,
+ .num_modes = 1,
+ .bpc = 6,
+ .size = {
+ .width = 256,
+ .height = 144,
+ },
+ .delay = {
+ .hpd_absent_delay = 200,
+ .prepare_to_enable = 80,
+ .unprepare = 500,
+ },
+ .bus_format = MEDIA_BUS_FMT_RGB666_1X18,
+ .connector_type = DRM_MODE_CONNECTOR_eDP,
+};
+
/*
* Datasheet specifies that at 60 Hz refresh rate:
* - total horizontal time: { 1506, 1592, 1716 }
@@ -4284,6 +4354,9 @@ static const struct of_device_id platform_of_match[] = {
.compatible = "innolux,g121x1-l03",
.data = &innolux_g121x1_l03,
}, {
+ .compatible = "innolux,n116bca-ea1",
+ .data = &innolux_n116bca_ea1,
+ }, {
.compatible = "innolux,n116bge",
.data = &innolux_n116bge,
}, {
diff --git a/drivers/gpu/drm/panel/panel-sony-acx424akp.c b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
index 065efae..95659a4 100644
--- a/drivers/gpu/drm/panel/panel-sony-acx424akp.c
+++ b/drivers/gpu/drm/panel/panel-sony-acx424akp.c
@@ -449,8 +449,7 @@ static int acx424akp_probe(struct mipi_dsi_device *dsi)
MIPI_DSI_MODE_VIDEO_BURST;
else
dsi->mode_flags =
- MIPI_DSI_CLOCK_NON_CONTINUOUS |
- MIPI_DSI_MODE_EOT_PACKET;
+ MIPI_DSI_CLOCK_NON_CONTINUOUS;
acx->supply = devm_regulator_get(dev, "vddi");
if (IS_ERR(acx->supply))
diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 92d965b..f0790e94 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -453,7 +453,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
struct drm_gpu_scheduler *sched;
struct drm_sched_rq *rq;
- if (spsc_queue_count(&entity->job_queue) || entity->num_sched_list <= 1)
+ if (spsc_queue_count(&entity->job_queue) || !entity->sched_list)
return;
fence = READ_ONCE(entity->last_scheduled);
@@ -467,8 +467,10 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
drm_sched_rq_remove_entity(entity->rq, entity);
entity->rq = rq;
}
-
spin_unlock(&entity->rq_lock);
+
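+ /* The rq of a single-scheduler entity is now fixed; clear the list so
+  * future calls return early.
+  */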
+ if (entity->num_sched_list == 1)
+ entity->sched_list = NULL;
}
/**
diff --git a/drivers/gpu/drm/stm/dw_mipi_dsi-stm.c b/drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
index 2e1f266..8399d33 100644
--- a/drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
+++ b/drivers/gpu/drm/stm/dw_mipi_dsi-stm.c
@@ -363,8 +363,7 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
dsi->vdd_supply = devm_regulator_get(dev, "phy-dsi");
if (IS_ERR(dsi->vdd_supply)) {
ret = PTR_ERR(dsi->vdd_supply);
- if (ret != -EPROBE_DEFER)
- DRM_ERROR("Failed to request regulator: %d\n", ret);
+ dev_err_probe(dev, ret, "Failed to request regulator\n");
return ret;
}
@@ -377,9 +376,7 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
dsi->pllref_clk = devm_clk_get(dev, "ref");
if (IS_ERR(dsi->pllref_clk)) {
ret = PTR_ERR(dsi->pllref_clk);
- if (ret != -EPROBE_DEFER)
- DRM_ERROR("Unable to get pll reference clock: %d\n",
- ret);
+ dev_err_probe(dev, ret, "Unable to get pll reference clock\n");
goto err_clk_get;
}
@@ -419,7 +416,7 @@ static int dw_mipi_dsi_stm_probe(struct platform_device *pdev)
dsi->dsi = dw_mipi_dsi_probe(pdev, &dw_mipi_dsi_stm_plat_data);
if (IS_ERR(dsi->dsi)) {
ret = PTR_ERR(dsi->dsi);
- DRM_ERROR("Failed to initialize mipi dsi host: %d\n", ret);
+ dev_err_probe(dev, ret, "Failed to initialize mipi dsi host\n");
goto err_dsi_probe;
}
diff --git a/drivers/gpu/drm/stm/ltdc.c b/drivers/gpu/drm/stm/ltdc.c
index b5117fc..65c3c79 100644
--- a/drivers/gpu/drm/stm/ltdc.c
+++ b/drivers/gpu/drm/stm/ltdc.c
@@ -31,6 +31,7 @@
#include <drm/drm_of.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_probe_helper.h>
+#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_vblank.h>
#include <video/videomode.h>
@@ -1054,14 +1055,6 @@ static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
return ret;
}
-/*
- * DRM_ENCODER
- */
-
-static const struct drm_encoder_funcs ltdc_encoder_funcs = {
- .destroy = drm_encoder_cleanup,
-};
-
static void ltdc_encoder_disable(struct drm_encoder *encoder)
{
struct drm_device *ddev = encoder->dev;
@@ -1122,8 +1115,7 @@ static int ltdc_encoder_init(struct drm_device *ddev, struct drm_bridge *bridge)
encoder->possible_crtcs = CRTC_MASK;
encoder->possible_clones = 0; /* No cloning support */
- drm_encoder_init(ddev, encoder, <dc_encoder_funcs,
- DRM_MODE_ENCODER_DPI, NULL);
+ drm_simple_encoder_init(ddev, encoder, DRM_MODE_ENCODER_DPI);
drm_encoder_helper_add(encoder, <dc_encoder_helper_funcs);
diff --git a/drivers/gpu/drm/vboxvideo/vbox_ttm.c b/drivers/gpu/drm/vboxvideo/vbox_ttm.c
index 0066a3c..fd8a53a 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_ttm.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_ttm.c
@@ -12,15 +12,13 @@
int vbox_mm_init(struct vbox_private *vbox)
{
- struct drm_vram_mm *vmm;
int ret;
struct drm_device *dev = &vbox->ddev;
struct pci_dev *pdev = to_pci_dev(dev->dev);
- vmm = drm_vram_helper_alloc_mm(dev, pci_resource_start(pdev, 0),
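+ /* drmm_-managed: the VRAM MM is released automatically with the device */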
+ ret = drmm_vram_helper_init(dev, pci_resource_start(pdev, 0),
vbox->available_vram_size);
- if (IS_ERR(vmm)) {
- ret = PTR_ERR(vmm);
+ if (ret) {
DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
return ret;
}
@@ -33,5 +31,4 @@ int vbox_mm_init(struct vbox_private *vbox)
void vbox_mm_fini(struct vbox_private *vbox)
{
arch_phys_wc_del(vbox->fb_mtrr);
- drm_vram_helper_release_mm(&vbox->ddev);
}
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 23eb6d7..669f2ee 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -174,7 +174,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
if (!sync_file) {
dma_fence_put(&out_fence->f);
ret = -ENOMEM;
- goto out_memdup;
+ goto out_unresv;
}
exbuf->fence_fd = out_fence_fd;
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index d69a5b6..4ff1ec2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -248,6 +248,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
if (ret != 0) {
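+ /* Put the object array so it isn't leaked on the error path */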
+ virtio_gpu_array_put_free(objs);
virtio_gpu_free_object(&shmem_obj->base);
return ret;
}
diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index a591d291..b292887 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/video/fbdev/core/fb_defio.c
@@ -52,13 +52,6 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
return VM_FAULT_SIGBUS;
get_page(page);
-
- if (vmf->vma->vm_file)
- page->mapping = vmf->vma->vm_file->f_mapping;
- else
- printk(KERN_ERR "no mapping available\n");
-
- BUG_ON(!page->mapping);
page->index = vmf->pgoff;
vmf->page = page;
@@ -151,17 +144,6 @@ static const struct vm_operations_struct fb_deferred_io_vm_ops = {
.page_mkwrite = fb_deferred_io_mkwrite,
};
-static int fb_deferred_io_set_page_dirty(struct page *page)
-{
- if (!PageDirty(page))
- SetPageDirty(page);
- return 0;
-}
-
-static const struct address_space_operations fb_deferred_io_aops = {
- .set_page_dirty = fb_deferred_io_set_page_dirty,
-};
-
int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
vma->vm_ops = &fb_deferred_io_vm_ops;
@@ -212,29 +194,12 @@ void fb_deferred_io_init(struct fb_info *info)
}
EXPORT_SYMBOL_GPL(fb_deferred_io_init);
-void fb_deferred_io_open(struct fb_info *info,
- struct inode *inode,
- struct file *file)
-{
- file->f_mapping->a_ops = &fb_deferred_io_aops;
-}
-EXPORT_SYMBOL_GPL(fb_deferred_io_open);
-
void fb_deferred_io_cleanup(struct fb_info *info)
{
struct fb_deferred_io *fbdefio = info->fbdefio;
- struct page *page;
- int i;
BUG_ON(!fbdefio);
cancel_delayed_work_sync(&info->deferred_work);
-
- /* clear out the mapping that we setup */
- for (i = 0 ; i < info->fix.smem_len; i += PAGE_SIZE) {
- page = fb_deferred_io_page(info, i);
- page->mapping = NULL;
- }
-
mutex_destroy(&fbdefio->lock);
}
EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
index 06f5805de..372b52a 100644
--- a/drivers/video/fbdev/core/fbmem.c
+++ b/drivers/video/fbdev/core/fbmem.c
@@ -1415,10 +1415,6 @@ __releases(&info->lock)
if (res)
module_put(info->fbops->owner);
}
-#ifdef CONFIG_FB_DEFERRED_IO
- if (info->fbdefio)
- fb_deferred_io_open(info, inode, file);
-#endif
out:
unlock_fb_info(info);
if (res)
diff --git a/include/linux/fb.h b/include/linux/fb.h
index ecfbcc0..a8dccd2 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -659,9 +659,6 @@ static inline void __fb_pad_aligned_buffer(u8 *dst, u32 d_pitch,
/* drivers/video/fb_defio.c */
int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma);
extern void fb_deferred_io_init(struct fb_info *info);
-extern void fb_deferred_io_open(struct fb_info *info,
- struct inode *inode,
- struct file *file);
extern void fb_deferred_io_cleanup(struct fb_info *info);
extern int fb_deferred_io_fsync(struct file *file, loff_t start,
loff_t end, int datasync);
diff --git a/include/uapi/drm/drm.h b/include/uapi/drm/drm.h
index 0827037c..67b94bc 100644
--- a/include/uapi/drm/drm.h
+++ b/include/uapi/drm/drm.h
@@ -625,30 +625,147 @@ struct drm_gem_open {
__u64 size;
};
+/**
+ * DRM_CAP_DUMB_BUFFER
+ *
+ * If set to 1, the driver supports creating dumb buffers via the
+ * &DRM_IOCTL_MODE_CREATE_DUMB ioctl.
+ */
#define DRM_CAP_DUMB_BUFFER 0x1
+/**
+ * DRM_CAP_VBLANK_HIGH_CRTC
+ *
+ * If set to 1, the kernel supports specifying a CRTC index in the high bits of
+ * &drm_wait_vblank_request.type.
+ *
+ * Starting from kernel version 2.6.39, this capability is always set to 1.
+ */
#define DRM_CAP_VBLANK_HIGH_CRTC 0x2
+/**
+ * DRM_CAP_DUMB_PREFERRED_DEPTH
+ *
+ * The preferred bit depth for dumb buffers.
+ *
+ * The bit depth is the number of bits used to indicate the color of a single
+ * pixel excluding any padding. This is different from the number of bits per
+ * pixel. For instance, XRGB8888 has a bit depth of 24 but has 32 bits per
+ * pixel.
+ *
+ * Note that this preference only applies to dumb buffers, it's irrelevant for
+ * other types of buffers.
+ */
#define DRM_CAP_DUMB_PREFERRED_DEPTH 0x3
+/**
+ * DRM_CAP_DUMB_PREFER_SHADOW
+ *
+ * If set to 1, the driver prefers userspace to render to a shadow buffer
+ * instead of directly rendering to a dumb buffer. For best speed, userspace
+ * should do streaming ordered memory copies into the dumb buffer and never
+ * read from it.
+ *
+ * Note that this preference only applies to dumb buffers, it's irrelevant for
+ * other types of buffers.
+ */
#define DRM_CAP_DUMB_PREFER_SHADOW 0x4
+/**
+ * DRM_CAP_PRIME
+ *
+ * Bitfield of supported PRIME sharing capabilities. See &DRM_PRIME_CAP_IMPORT
+ * and &DRM_PRIME_CAP_EXPORT.
+ *
+ * PRIME buffers are exposed as dma-buf file descriptors. See
+ * Documentation/gpu/drm-mm.rst, section "PRIME Buffer Sharing".
+ */
#define DRM_CAP_PRIME 0x5
+/**
+ * DRM_PRIME_CAP_IMPORT
+ *
+ * If this bit is set in &DRM_CAP_PRIME, the driver supports importing PRIME
+ * buffers via the &DRM_IOCTL_PRIME_FD_TO_HANDLE ioctl.
+ */
#define DRM_PRIME_CAP_IMPORT 0x1
+/**
+ * DRM_PRIME_CAP_EXPORT
+ *
+ * If this bit is set in &DRM_CAP_PRIME, the driver supports exporting PRIME
+ * buffers via the &DRM_IOCTL_PRIME_HANDLE_TO_FD ioctl.
+ */
#define DRM_PRIME_CAP_EXPORT 0x2
+/**
+ * DRM_CAP_TIMESTAMP_MONOTONIC
+ *
+ * If set to 0, the kernel will report timestamps with ``CLOCK_REALTIME`` in
+ * struct drm_event_vblank. If set to 1, the kernel will report timestamps with
+ * ``CLOCK_MONOTONIC``. See ``clock_gettime(2)`` for the definition of these
+ * clocks.
+ *
+ * Starting from kernel version 2.6.39, the default value for this capability
+ * is 1. Starting from kernel version 4.15, this capability is always set to 1.
+ */
#define DRM_CAP_TIMESTAMP_MONOTONIC 0x6
+/**
+ * DRM_CAP_ASYNC_PAGE_FLIP
+ *
+ * If set to 1, the driver supports &DRM_MODE_PAGE_FLIP_ASYNC.
+ */
#define DRM_CAP_ASYNC_PAGE_FLIP 0x7
-/*
- * The CURSOR_WIDTH and CURSOR_HEIGHT capabilities return a valid widthxheight
- * combination for the hardware cursor. The intention is that a hardware
- * agnostic userspace can query a cursor plane size to use.
+/**
+ * DRM_CAP_CURSOR_WIDTH
+ *
+ * The ``CURSOR_WIDTH`` and ``CURSOR_HEIGHT`` capabilities return a valid
+ * width x height combination for the hardware cursor. The intention is that a
+ * hardware agnostic userspace can query a cursor plane size to use.
*
* Note that the cross-driver contract is to merely return a valid size;
* drivers are free to attach another meaning on top, eg. i915 returns the
* maximum plane size.
*/
#define DRM_CAP_CURSOR_WIDTH 0x8
+/**
+ * DRM_CAP_CURSOR_HEIGHT
+ *
+ * See &DRM_CAP_CURSOR_WIDTH.
+ */
#define DRM_CAP_CURSOR_HEIGHT 0x9
+/**
+ * DRM_CAP_ADDFB2_MODIFIERS
+ *
+ * If set to 1, the driver supports supplying modifiers in the
+ * &DRM_IOCTL_MODE_ADDFB2 ioctl.
+ */
#define DRM_CAP_ADDFB2_MODIFIERS 0x10
+/**
+ * DRM_CAP_PAGE_FLIP_TARGET
+ *
+ * If set to 1, the driver supports the &DRM_MODE_PAGE_FLIP_TARGET_ABSOLUTE and
+ * &DRM_MODE_PAGE_FLIP_TARGET_RELATIVE flags in
+ * &drm_mode_crtc_page_flip_target.flags for the &DRM_IOCTL_MODE_PAGE_FLIP
+ * ioctl.
+ */
#define DRM_CAP_PAGE_FLIP_TARGET 0x11
+/**
+ * DRM_CAP_CRTC_IN_VBLANK_EVENT
+ *
+ * If set to 1, the kernel supports reporting the CRTC ID in
+ * &drm_event_vblank.crtc_id for the &DRM_EVENT_VBLANK and
+ * &DRM_EVENT_FLIP_COMPLETE events.
+ *
+ * Starting from kernel version 4.12, this capability is always set to 1.
+ */
#define DRM_CAP_CRTC_IN_VBLANK_EVENT 0x12
+/**
+ * DRM_CAP_SYNCOBJ
+ *
+ * If set to 1, the driver supports sync objects. See
+ * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
+ */
#define DRM_CAP_SYNCOBJ 0x13
+/**
+ * DRM_CAP_SYNCOBJ_TIMELINE
+ *
+ * If set to 1, the driver supports timeline operations on sync objects. See
+ * Documentation/gpu/drm-mm.rst, section "DRM Sync Objects".
+ */
#define DRM_CAP_SYNCOBJ_TIMELINE 0x14
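+
+/*
+ * Usage sketch (illustrative only, not part of this header): userspace
+ * queries a capability by filling struct drm_get_cap, defined below, and
+ * calling the DRM_IOCTL_GET_CAP ioctl, or via the libdrm wrapper drmGetCap():
+ *
+ *     struct drm_get_cap cap = { .capability = DRM_CAP_DUMB_BUFFER };
+ *     int ret = drmIoctl(fd, DRM_IOCTL_GET_CAP, &cap);
+ *     bool supports_dumb = (ret == 0 && cap.value == 1);
+ */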
/* DRM_IOCTL_GET_CAP ioctl argument type */