This exercises a race in the flink name destruction of the current drm
gem core. When racing a gem close against a gem open, the open can
sneak in and cause the kernel to leak the flink name and its reference.
This results in leaked gem objects that won't get reaped even at drm
file close time. On my 2-core/4-thread snb machine this leaks on the
order of 1k objects per second.
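A minimal sketch of the racing pattern (not the actual test, with all
error handling dropped): one thread keeps creating, flinking and closing
a bo while a second thread hammers DRM_IOCTL_GEM_OPEN on the published
name. The single shared fd and fixed loop count are simplifications.

#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/drm.h>
#include <drm/i915_drm.h>

#define RACE_LOOPS 100000

static int fd;                 /* i915 DRM fd, opened elsewhere */
static volatile uint32_t name; /* flink name published to the opener */

/* Thread B: try to open the name while the owner is closing it. */
static void *racing_opener(void *arg)
{
	(void)arg;

	for (int i = 0; i < RACE_LOOPS; i++) {
		struct drm_gem_open open_arg = { .name = name };

		if (!open_arg.name)
			continue;
		/* ENOENT is fine; winning the race is the interesting case. */
		if (ioctl(fd, DRM_IOCTL_GEM_OPEN, &open_arg) == 0) {
			struct drm_gem_close close_arg = { .handle = open_arg.handle };
			ioctl(fd, DRM_IOCTL_GEM_CLOSE, &close_arg);
		}
	}
	return NULL;
}

/* Thread A: create a bo, publish a flink name, drop the last handle. */
static void create_flink_close_loop(void)
{
	for (int i = 0; i < RACE_LOOPS; i++) {
		struct drm_i915_gem_create create = { .size = 4096 };
		struct drm_gem_flink flink;
		struct drm_gem_close close_arg;

		ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);

		memset(&flink, 0, sizeof(flink));
		flink.handle = create.handle;
		ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink);
		name = flink.name;

		/* Closing the last handle should also destroy the name;
		 * a racing open must not leak it. */
		memset(&close_arg, 0, sizeof(close_arg));
		close_arg.handle = create.handle;
		ioctl(fd, DRM_IOCTL_GEM_CLOSE, &close_arg);
	}
}

static void run_race(void)
{
	pthread_t thread;

	pthread_create(&thread, NULL, racing_opener, NULL);
	create_flink_close_loop();
	pthread_join(thread, NULL);
}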
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This exercises the bug fixed in
commit 94a335dba34ff47cad3d6d0c29b452d43a1be3c8
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Wed Jul 17 14:51:28 2013 +0200
drm/i915: correctly restore fences with objects attached
For fun I've also added a subtest for the inverse transition.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Extend the simple functionality by repeating it under the rude
interrupter and by chaining together a couple of steps in new test cases.
As a compromise for adding more tests, incorporate the piglit subtest
framework.
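Roughly what the result looks like, sketched with today's igt helper
names (igt_main/igt_subtest/igt_fork_signal_helper); do_one_step() is
just a stand-in for the real test body:

#include <unistd.h>
#include "igt.h"

static int fd;

static void do_one_step(int fd)
{
	(void)fd; /* the real test body (blits, relocs, ...) goes here */
}

igt_main
{
	igt_fixture
		fd = drm_open_driver(DRIVER_INTEL);

	igt_subtest("basic")
		do_one_step(fd);

	/* Same step, but with the signal helper interrupting us to force
	 * the kernel's -ERESTARTSYS/slow paths. */
	igt_subtest("interruptible") {
		igt_fork_signal_helper();
		do_one_step(fd);
		igt_stop_signal_helper();
	}

	/* Chain a couple of steps together in a single subtest. */
	igt_subtest("chained") {
		do_one_step(fd);
		do_one_step(fd);
	}

	igt_fixture
		close(fd);
}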
The reloc, pread and pwrite slow paths are normally prevented by
prefaulting, so these subtests disable prefault first, then do the
reloc, pread and pwrite, and finally re-enable prefault.
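The pattern boils down to the following sketch, assuming the i915
prefault_disable module parameter is what gets toggled (the real
subtests go through the library helpers rather than poking sysfs
directly; set_prefault_disable() below is illustrative):

#include <stdio.h>

#define PREFAULT_PARAM "/sys/module/i915/parameters/prefault_disable"

static void set_prefault_disable(int value)
{
	FILE *f = fopen(PREFAULT_PARAM, "w");

	if (!f)
		return; /* parameter not available on this kernel */
	fprintf(f, "%d", value);
	fclose(f);
}

static void run_slow_path_subtest(void (*body)(void))
{
	set_prefault_disable(1); /* force the reloc/pread/pwrite slow paths */
	body();                  /* do the actual reloc, pread or pwrite */
	set_prefault_disable(0); /* restore prefaulting afterwards */
}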
Signed-off-by: Xiong Zhang <xiong.y.zhang@intel.com>
We tweak the tests marked as runnable in simulation to run more quickly,
more often than not at the expense of stress testing (which is of
arguable interest for the initial bring-up in simulation). Hopefully the
values chosen still test something, which is not always straightforward.
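The kind of tweak this amounts to, as a sketch; igt_run_in_simulation()
is the current library check for this, and the loop counts below are
purely illustrative:

#include "igt.h"

static int loops_for_this_run(void)
{
	/* Full stress count on real hardware, a token count in simulation. */
	return igt_run_in_simulation() ? 2 : 10000;
}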
It does run quickly; the numbers on an IVB machine are:
$ time sudo IGT_SIMULATION=0 ./piglit-run.py tests/igt.tests foo
[...]
real 2m0.141s
user 0m16.365s
sys 1m33.382s
Vs.
$ time sudo IGT_SIMULATION=1 ./piglit-run.py tests/igt.tests foo
[...]
real 0m0.448s
user 0m0.226s
sys 0m0.183s
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Let's start with a small set of tests, and eventually consider running
more.
The current list should then be:
gem_mmap
gem_pread_after_blit
gem_ring_sync_loop
gem_ctx_basic
gem_pipe_control_store_loop
gem_storedw_loop_render
gem_storedw_loop_blt
gem_storedw_loop_bsd
gem_render_linear_blits
gem_tiled_blits
gem_cpu_reloc
gem_exec_nop
gem_mmap_gtt
v2: add (Daniel Vetter):
gem_exec_bad_domains
gem_exec_faulting_reloc
gem_flink
gem_reg_read
gem_reloc_overflow
gem_tiling_max_stride
prime_*
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
It's more accurate this way, as the quick mode is really only useful in
the simulation environment.
v2: Use the INTEL_ prefix to have a chance to share the same environment
variable as piglit OpenGL tests
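What the check amounts to, sketched with getenv(); this assumes the
final variable name is INTEL_SIMULATION and that only the value 1 means
"simulation", and the tests go through a library helper rather than
reading the variable directly:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

static bool run_in_simulation(void)
{
	const char *sim = getenv("INTEL_SIMULATION");

	/* Treat INTEL_SIMULATION=1 as "running in the simulator". */
	return sim && strcmp(sim, "1") == 0;
}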
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
We've somewhat recently added RGB30 support to testdisplay, but we need
cairo 1.12.0 for that. Barf early.
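One way to barf early, sketched as a compile-time guard on cairo's
version macros; the actual check may just as well live in the build
system:

#include <cairo.h>

#if CAIRO_VERSION < CAIRO_VERSION_ENCODE(1, 12, 0)
#error "testdisplay's RGB30 support needs cairo >= 1.12.0"
#endif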
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We want prime to only ever create one native gem object for each
dma-buf it sees. Multiple imports can happen e.g. when several processes
import the same foreign dma-buf: the application imports a yuv frame
from v4l to encode it into a video stream, and the compositor imports it
into its fd again to display it with an overlay.
Hence add a bunch of tests which check all the various orders in which
this could happen. Currently they all fail.
Checking flink names is the easiest (and afaik only) way to check
whether we're indeed dealing with the same object.
This checks both ways, i.e. exporting from i915 and from nouveau, each
with two variants of the test: One reuses the prime fd, the other
closes it and creates a new one.
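The core of the flink check, sketched with libdrm's prime helpers for
the i915 side only (error handling, the fd-reuse vs. fd-reopen variants
and the nouveau half are omitted; imports_share_one_object() is
illustrative):

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>	/* drmPrimeHandleToFD, drmPrimeFDToHandle, drmIoctl */

static uint32_t flink_name(int fd, uint32_t handle)
{
	struct drm_gem_flink flink;

	memset(&flink, 0, sizeof(flink));
	flink.handle = handle;
	drmIoctl(fd, DRM_IOCTL_GEM_FLINK, &flink);
	return flink.name;
}

/* fd1/fd2 are two independently opened i915 fds; handle lives on fd1. */
static int imports_share_one_object(int fd1, int fd2, uint32_t handle)
{
	int prime_fd;
	uint32_t handle1, handle2;

	drmPrimeHandleToFD(fd1, handle, DRM_CLOEXEC, &prime_fd);
	drmPrimeFDToHandle(fd1, prime_fd, &handle1); /* self-import */
	drmPrimeFDToHandle(fd2, prime_fd, &handle2); /* import on the 2nd fd */

	/* flink names are device-global, so equal names mean the kernel
	 * reused a single gem object for both imports. */
	return flink_name(fd1, handle1) == flink_name(fd2, handle2);
}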
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It's purely accidental that importing the same bo into different drm
nouveau fds yields the same handle.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
They simply take forever with the current kernel implementation. And
since everyone has switched over to the event-based interface, I don't
see much incentive to try to fix that.
So just disable them.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The intel_reg_dumper tool reads a lot of display registers. If we
don't turn on the power well, dmesg will get flooded with tons of
messages about unclaimed registers. So here we enable the "Debug"
power well register and then restore its state later. It's impossible
to guarantee that other things won't mess with the debug register
between our get and put calls, but at least we're trying our best to
keep things working fine, and it's the debug register anyway.
As far as I know, nothing else uses the Debug register for anything,
so we should be safe for now.
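The save/enable/restore pattern described above, sketched with
placeholder mmio accessors; the register offset and bit are assumed to
match the i915 driver's HSW "Debug" power well definitions:

#include <stdint.h>

/* Hypothetical mmio accessors standing in for the tool's own helpers. */
uint32_t read_reg(uint32_t reg);
void write_reg(uint32_t reg, uint32_t val);

/* Assumed to match HSW_PWR_WELL_DEBUG in the i915 driver. */
#define PWR_WELL_CTL_DEBUG	0x4540c
#define PWR_WELL_ENABLE_REQUEST	(1u << 31)

static uint32_t power_well_saved;

static void power_well_get(void)
{
	power_well_saved = read_reg(PWR_WELL_CTL_DEBUG);
	/* Request the power well so the display registers can be read
	 * without flooding dmesg with unclaimed register warnings. */
	write_reg(PWR_WELL_CTL_DEBUG, power_well_saved | PWR_WELL_ENABLE_REQUEST);
}

static void power_well_put(void)
{
	/* Put things back the way they were before the dump. */
	write_reg(PWR_WELL_CTL_DEBUG, power_well_saved);
}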
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>