Comments
sw...@google.com <sw...@google.com> #2
We should report this problem upstream. Having DMA debugging enabled is certainly causing this lockdep splat to trigger. It looks like tmc_sync_etr_buf() holds the drvdata->spinlock and then calls into DMA APIs, which is not good. It would be better to somehow call DMA APIs without holding a spinlock like that.
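For reference, here is a condensed sketch of the shape of the problem, using a hypothetical stand-in struct rather than the actual coresight-tmc source:

#include <linux/dma-mapping.h>
#include <linux/spinlock.h>

/* Hypothetical stand-in for the relevant driver state, for illustration. */
struct etr_sketch {
	spinlock_t spinlock;	/* plays the role of drvdata->spinlock */
	struct device *dev;
	dma_addr_t daddr;
	size_t size;
};

/*
 * The problematic shape: the DMA sync runs with the driver spinlock
 * held, so every lock the DMA-debug code takes internally
 * (dma_entry_hash[i].lock, radix_lock) becomes ordered inside the
 * driver spinlock, and lockdep records that dependency.
 */
static void sketch_sync(struct etr_sketch *s)
{
	unsigned long flags;

	spin_lock_irqsave(&s->spinlock, flags);
	dma_sync_single_for_cpu(s->dev, s->daddr, s->size, DMA_FROM_DEVICE);
	spin_unlock_irqrestore(&s->spinlock, flags);
}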
de...@google.com <de...@google.com> #3
Reported upstream:
Is this issue easily reproducible if you run etm profiling manually with perf record -e cs_etm/autofdo/uk -a?
And did you have any other config changes on top of enabling the debug info?
sw...@google.com <sw...@google.com> #4
> Is this issue easily reproducible if you run etm profiling manually
That command doesn't cause the lockdep splat if I run it at the login screen. Instead, I get this DMA debug error:
[ 43.368020] DMA-API: coresight-tmc 6048000.etr: device driver failed to check map error[device address=0x0000000fffc00000] [size=4194304 bytes] [mapped as single]
[ 43.368039] WARNING: CPU: 6 PID: 2961 at kernel/dma/debug.c:1047 check_unmap+0xd80/0x13c0
[ 43.368060] Modules linked in: 8021q lzo_rle lzo_compress zram zsmalloc uinput veth uvcvideo uvc rfcomm algif_hash algif_skcipher venus_dec venus_enc af_alg cros_ec_typec qcom_spmi_adc5 qcom_vadc_common qcom_spmi_temp_alarm roles snd_soc_rt5682_i2c snd_soc_rt5682 snd_soc_sc7180 snd_soc_rl6231 qcom_stats snd_soc_qcom_common hci_uart btqca ip6table_nat venus_core ath10k_snoc ath10k_core icc_bwmon ath coresight_etm4x coresight_funnel coresight_tmc coresight_replicator snd_soc_lpass_sc7180 snd_soc_lpass_hdmi snd_soc_lpass_cpu snd_soc_lpass_platform coresight snd_soc_max98357a xt_MASQUERADE xt_cgroup fuse mac80211 iio_trig_sysfs bluetooth cros_ec_sensors cros_ec_lid_angle ecdh_generic cros_ec_sensors_core ecc industrialio_triggered_buffer kfifo_buf cros_ec_sensorhub cfg80211 joydev
[ 43.368213] CPU: 6 PID: 2961 Comm: kworker/6:3 Not tainted 6.6.21-lockdep-01303-g479a6ef70fae #1 7e7c3a95f01f5b2f78f949dd2505f69e90c5f98d
[ 43.368223] Hardware name: Google Lazor (rev3 - 8) with KB Backlight (DT)
[ 43.368229] Workqueue: events free_event_data [coresight]
[ 43.368254] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 43.368262] pc : check_unmap+0xd80/0x13c0
[ 43.368270] lr : check_unmap+0xd78/0x13c0
[ 43.368277] sp : ffffffc087297860
[ 43.368282] pmr_save: 00000060
[ 43.368286] x29: ffffffc0872978e0 x28: dfffffc000000000 x27: ffffffc08729796c
[ 43.368299] x26: ffffff8080fd2dac x25: ffffffd77e7461a0 x24: 1ffffff0101fa5b5
[ 43.368310] x23: ffffff8080e10660 x22: 0000000fffc00000 x21: ffffff8080fd2d80
[ 43.368322] x20: ffffffd718f614a0 x19: 0000000000400000 x18: 0000000000000000
[ 43.368333] x17: 0000000000000000 x16: 0000000000000000 x15: 000000000000000a
[ 43.368344] x14: 0000000000000000 x13: 0000000000000001 x12: 0000000000000000
[ 43.368355] x11: 0000000000000003 x10: 0000000000000000 x9 : 2b6f88e1c8ddb100
[ 43.368367] x8 : 2b6f88e1c8ddb100 x7 : 0000000000000001 x6 : 0000000000000001
[ 43.368378] x5 : ffffffc087297528 x4 : ffffffd77f0f6320 x3 : ffffffd77ca7d510
[ 43.368389] x2 : 0000000000000001 x1 : 0000000000000004 x0 : ffffffd77e746e80
[ 43.368400] Call trace:
[ 43.368404] check_unmap+0xd80/0x13c0
[ 43.368413] debug_dma_unmap_page+0xe8/0x140
[ 43.368421] dma_free_pages+0x44/0x80
[ 43.368429] tmc_etr_free_flat_buf+0x140/0x170 [coresight_tmc 1f6e34c254c5bd8ef086dfe945ba2aa93e0bcfe5]
[ 43.368445] tmc_free_etr_buf+0xa8/0xe8 [coresight_tmc 1f6e34c254c5bd8ef086dfe945ba2aa93e0bcfe5]
[ 43.368458] tmc_free_etr_buffer+0xe4/0x188 [coresight_tmc 1f6e34c254c5bd8ef086dfe945ba2aa93e0bcfe5]
[ 43.368471] free_event_data+0x148/0x2b8 [coresight 652d5f1a302b25638601de5503f8866cafd8f16f]
[ 43.368485] process_scheduled_works+0x510/0xd60
[ 43.368494] worker_thread+0x6f0/0xa80
[ 43.368501] kthread+0x23c/0x358
[ 43.368507] ret_from_fork+0x10/0x20
[ 43.368515] ---[ end trace 0000000000000000 ]---
[ 43.368521] DMA-API: Mapped at:
[ 43.368525] debug_dma_map_page+0x78/0x290
[ 43.368533] dma_alloc_pages+0x78/0xa0
[ 43.368540] tmc_etr_alloc_flat_buf+0xf4/0x298 [coresight_tmc]
[ 43.368553] tmc_alloc_etr_buf+0x138/0x2d8 [coresight_tmc]
[ 43.368565] alloc_etr_buf+0x8c/0x148 [coresight_tmc]
I also tried logging in and then running the command, and still nothing. Probably because chain entry #2 is from kswapd?
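For context, that DMA-debug warning fires when a mapping is torn down without the driver ever having called dma_mapping_error() on it. A minimal sketch of the pattern the check expects for streaming mappings (hypothetical helper, not the coresight code):

#include <linux/dma-mapping.h>

/* Hypothetical helper, for illustration only. */
static int sketch_map(struct device *dev, void *buf, size_t size,
		      dma_addr_t *out)
{
	dma_addr_t daddr = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);

	/*
	 * This is the check the warning complains about: dma_mapping_error()
	 * marks the DMA-debug entry as checked, so check_unmap() stays
	 * quiet when the buffer is eventually unmapped.
	 */
	if (dma_mapping_error(dev, daddr))
		return -ENOMEM;

	*out = daddr;
	return 0;
}

In this trace, though, the buffer came from dma_alloc_pages(), which reports failure by returning NULL rather than through dma_mapping_error(), so this warning may just be a quirk of how dma-debug tracks that allocation path rather than a separate driver bug.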
sw...@google.com <sw...@google.com> #5
Oh, I forgot to enable lockdep with USE="debug lockdebug" on my emerge command. I did that, but the lockdep report still doesn't reproduce.
sw...@google.com <sw...@google.com> #6
Use this attachment for the config.
sw...@google.com <sw...@google.com> #7
I ran that command, logged in, and started an Android app, and hit a similar lockdep report:
[ 495.381641] 6.6.21-lockdep-01303-g479a6ef70fae #1 Tainted: G W
[ 495.381665] ------------------------------------------------------
[ 495.381683] virtio_blk/6627 is trying to acquire lock:
[ 495.381705] ffffffd7a9b5ed90 (&pgdat->kswapd_wait){....}-{2:2}, at: __wake_up_common_lock+0xdc/0x190
[ 495.381792]
but task is already holding lock:
[ 495.381811] ffffffd7a9518bd8 (radix_lock){-.-.}-{2:2}, at: add_dma_entry+0x184/0x478
[ 495.381878]
which lock already depends on the new lock.

[ 495.381896]
the existing dependency chain (in reverse order) is:
[ 495.381914]
-> #6 (radix_lock){-.-.}-{2:2}:
[ 495.381967] _raw_spin_lock_irqsave+0x74/0xf8
[ 495.382002] check_unmap+0xed4/0x13c0
[ 495.382030] debug_dma_free_coherent+0x22c/0x288
[ 495.382057] dma_free_attrs+0xc8/0x148
[ 495.382083] qcom_scm_assign_mem+0x578/0x680
[ 495.382113] qcom_rmtfs_mem_probe+0x54c/0x7e8
[ 495.382141] platform_probe+0x124/0x168
[ 495.382170] really_probe+0x240/0x708
[ 495.382196] __driver_probe_device+0x164/0x340
[ 495.382220] driver_probe_device+0x6c/0x150
[ 495.382245] __driver_attach+0x174/0x298
[ 495.382270] bus_for_each_dev+0xd8/0x138
[ 495.382298] driver_attach+0x50/0x68
[ 495.382323] bus_add_driver+0x1e8/0x3f0
[ 495.382350] driver_register+0x164/0x2b0
[ 495.382375] __platform_driver_register+0x78/0x98
[ 495.382401] qcom_rmtfs_mem_init+0x5c/0xc8
[ 495.382433] do_one_initcall+0x198/0x780
[ 495.382461] do_initcall_level+0x118/0x158
[ 495.382492] do_initcalls+0x5c/0xa8
[ 495.382519] do_basic_setup+0x80/0xa0
[ 495.382546] kernel_init_freeable+0x294/0x400
[ 495.382574] kernel_init+0x28/0x140
[ 495.382603] ret_from_fork+0x10/0x20
[ 495.382631]
-> #5 (&dma_entry_hash[i].lock){-.-.}-{2:2}:
[ 495.382690] _raw_spin_lock_irqsave+0x74/0xf8
[ 495.382721] check_sync+0x78/0xdb0
[ 495.382749] debug_dma_sync_single_for_cpu+0x120/0x1a8
[ 495.382776] dma_sync_single_for_cpu+0x1f8/0x250
[ 495.382801] tmc_etr_sync_flat_buf+0x174/0x1b8 [coresight_tmc]
[ 495.382885] tmc_sync_etr_buf+0x210/0x370 [coresight_tmc]
[ 495.382951] tmc_update_etr_buffer+0x164/0x858 [coresight_tmc]
[ 495.383016] etm_event_stop+0x33c/0x440 [coresight]
[ 495.383147] etm_event_del+0x1c/0x30 [coresight]
[ 495.383244] event_sched_out+0x24c/0x6e8
[ 495.383278] group_sched_out+0xfc/0x468
[ 495.383305] __perf_event_disable+0xe8/0x170
[ 495.383331] event_function+0x2ac/0x460
[ 495.383358] remote_function+0xc0/0x160
[ 495.383385] generic_exec_single+0x128/0x710
[ 495.383414] smp_call_function_single+0x178/0x3c0
[ 495.383440] event_function_call+0x1e8/0x330
[ 495.383467] _perf_event_disable+0x7c/0xc0
[ 495.383492] perf_event_for_each_child+0x84/0x100
[ 495.383518] perf_ioctl+0xe4/0x15f8
[ 495.383543] __arm64_sys_ioctl+0x118/0x160
[ 495.383570] invoke_syscall+0x78/0x218
[ 495.383596] el0_svc_common+0x118/0x1e8
[ 495.383621] do_el0_svc+0x9c/0xc8
[ 495.383645] el0_svc+0x50/0xd8
[ 495.383672] el0t_64_sync_handler+0x44/0xf0
[ 495.383697] el0t_64_sync+0x1a8/0x1b0
[ 495.383724]
-> #4 (&drvdata->spinlock){....}-{2:2}:
[ 495.383785] _raw_spin_lock_irqsave+0x74/0xf8
[ 495.383818] tmc_update_etr_buffer+0xbc/0x858 [coresight_tmc]
[ 495.383900] etm_event_stop+0x33c/0x440 [coresight]
[ 495.384004] etm_event_del+0x1c/0x30 [coresight]
[ 495.384101] event_sched_out+0x24c/0x6e8
[ 495.384135] group_sched_out+0xfc/0x468
[ 495.384162] __pmu_ctx_sched_out+0x17c/0x250
[ 495.384190] ctx_sched_out+0x214/0x348
[ 495.384217] __perf_event_task_sched_in+0x328/0x5b8
[ 495.384244] finish_task_switch+0x314/0x528
[ 495.384275] __schedule+0x13d8/0x21f8
[ 495.384302] preempt_schedule_common+0xdc/0x1e0
[ 495.384328] preempt_schedule+0x8c/0x90
[ 495.384353] kvm_arch_vcpu_ioctl_run+0xa88/0x15a8
[ 495.384381] kvm_vcpu_ioctl+0xd1c/0x1640
[ 495.384408] __arm64_sys_ioctl+0x118/0x160
[ 495.384434] invoke_syscall+0x78/0x218
[ 495.384460] el0_svc_common+0x118/0x1e8
[ 495.384484] do_el0_svc+0x9c/0xc8
[ 495.384508] el0_svc+0x50/0xd8
[ 495.384534] el0t_64_sync_handler+0x44/0xf0
[ 495.384559] el0t_64_sync+0x1a8/0x1b0
[ 495.384585]
-> #3 (&ctx->lock){-...}-{2:2}:
[ 495.384646] _raw_spin_lock+0x5c/0xa8
[ 495.384677] __perf_event_task_sched_out+0x324/0xcf8
[ 495.384704] __schedule+0x1fd0/0x21f8
[ 495.384729] schedule+0xd0/0x1b8
[ 495.384753] kvm_vcpu_block+0x114/0x1d0
[ 495.384783] kvm_vcpu_halt+0x160/0x850
[ 495.384810] kvm_vcpu_wfi+0xbc/0x1f8
[ 495.384836] kvm_handle_wfx+0x12c/0x2b8
[ 495.384861] handle_exit+0x114/0x2c0
[ 495.384885] kvm_arch_vcpu_ioctl_run+0xa08/0x15a8
[ 495.384910] kvm_vcpu_ioctl+0xd1c/0x1640
[ 495.384936] __arm64_sys_ioctl+0x118/0x160
[ 495.384961] invoke_syscall+0x78/0x218
[ 495.384985] el0_svc_common+0x118/0x1e8
[ 495.385009] do_el0_svc+0x9c/0xc8
[ 495.385033] el0_svc+0x50/0xd8
[ 495.385057] el0t_64_sync_handler+0x44/0xf0
[ 495.385082] el0t_64_sync+0x1a8/0x1b0
[ 495.385108]
-> #2 (&rq->__lock){-.-.}-{2:2}:
[ 495.385165] _raw_spin_lock_nested+0x60/0xa8
[ 495.385194] raw_spin_rq_lock_nested+0x34/0x58
[ 495.385222] task_fork_fair+0x70/0x1a8
[ 495.385251] sched_cgroup_fork+0x2f4/0x3a8
[ 495.385279] copy_process+0x1ac0/0x28a8
[ 495.385308] kernel_clone+0x12c/0x6d8
[ 495.385335] user_mode_thread+0xd4/0x128
[ 495.385363] rest_init+0x30/0x228
[ 495.385390] arch_call_rest_init+0x1c/0x28
[ 495.385421] start_kernel+0x344/0x408
[ 495.385448] __primary_switched+0xc0/0xd0
[ 495.385476]
-> #1 (&p->pi_lock){-.-.}-{2:2}:
[ 495.385532] _raw_spin_lock_irqsave+0x74/0xf8
[ 495.385562] try_to_wake_up+0x6c/0xeb0
[ 495.385590] default_wake_function+0x60/0x88
[ 495.385617] autoremove_wake_function+0x2c/0x108
[ 495.385647] __wake_up_common+0x228/0x3c0
[ 495.385675] __wake_up_common_lock+0xfc/0x190
[ 495.385702] __wake_up+0x20/0x38
[ 495.385728] wakeup_kswapd+0x274/0x548
[ 495.385758] wake_all_kswapds+0x138/0x248
[ 495.385786] __alloc_pages_slowpath+0x2d8/0x1b78
[ 495.385813] __alloc_pages+0x3fc/0x728
[ 495.385839] __folio_alloc+0x1c/0x30
[ 495.385864] shmem_get_folio_gfp+0x698/0xf38
[ 495.385890] shmem_fault+0x16c/0x5d0
[ 495.385915] __do_fault+0xfc/0x218
[ 495.385943] handle_mm_fault+0x12cc/0x2170
[ 495.385970] __get_user_pages+0x26c/0x598
[ 495.385996] get_user_pages_unlocked+0x208/0x530
[ 495.386022] hva_to_pfn+0x2a4/0xf38
[ 495.386051] kvm_follow_pfn+0x16c/0x208
[ 495.386078] __gfn_to_pfn_memslot+0x184/0x330
[ 495.386105] kvm_handle_guest_abort+0xa64/0x19a8
[ 495.386134] handle_exit+0x114/0x2c0
[ 495.386158] kvm_arch_vcpu_ioctl_run+0xa08/0x15a8
[ 495.386184] kvm_vcpu_ioctl+0xd1c/0x1640
[ 495.386210] __arm64_sys_ioctl+0x118/0x160
[ 495.386237] invoke_syscall+0x78/0x218
[ 495.386262] el0_svc_common+0x118/0x1e8
[ 495.386286] do_el0_svc+0x9c/0xc8
[ 495.386310] el0_svc+0x50/0xd8
[ 495.386336] el0t_64_sync_handler+0x44/0xf0
[ 495.386361] el0t_64_sync+0x1a8/0x1b0
[ 495.386387]
-> #0 (&pgdat->kswapd_wait){....}-{2:2}:
[ 495.386445] __lock_acquire+0x2ae8/0x60b8
[ 495.386476] lock_acquire+0x204/0x6c0
[ 495.386504] _raw_spin_lock_irqsave+0x74/0xf8
[ 495.386532] __wake_up_common_lock+0xdc/0x190
[ 495.386561] __wake_up+0x20/0x38
[ 495.386588] wakeup_kswapd+0x274/0x548
[ 495.386617] get_page_from_freelist+0x2254/0x24f8
[ 495.386645] __alloc_pages+0x2bc/0x728
[ 495.386670] new_slab+0xc4/0x508
[ 495.386699] ___slab_alloc+0x7d8/0xcc8
[ 495.386725] kmem_cache_alloc+0x300/0x428
[ 495.386751] radix_tree_node_alloc+0x154/0x2c8
[ 495.386779] radix_tree_insert+0x124/0x3d8
[ 495.386804] add_dma_entry+0x19c/0x478
[ 495.386833] debug_dma_map_sg+0x610/0x780
[ 495.386859] __dma_map_sg_attrs+0x100/0x170
[ 495.386884] dma_map_sg_attrs+0x18/0x30
[ 495.386908] cqhci_request+0x94c/0x1268
[ 495.386936] mmc_cqe_start_req+0xbc/0x240
[ 495.386965] mmc_blk_mq_issue_rq+0x7cc/0xb18
[ 495.386992] mmc_mq_queue_rq+0x3a4/0x7c8
[ 495.387019] blk_mq_dispatch_rq_list+0x998/0x14e8
[ 495.387049] __blk_mq_sched_dispatch_requests+0xaa0/0xfe8
[ 495.387075] blk_mq_sched_dispatch_requests+0xa8/0xe8
[ 495.387100] blk_mq_run_hw_queue+0x470/0x528
[ 495.387125] blk_mq_flush_plug_list+0x608/0x12c8
[ 495.387152] __blk_flush_plug+0x2fc/0x370
[ 495.387180] blk_finish_plug+0x60/0x90
[ 495.387206] read_pages+0x3d8/0x530
[ 495.387231] page_cache_ra_unbounded+0x3c0/0x4e8
[ 495.387256] do_page_cache_ra+0xc8/0xe8
[ 495.387281] ondemand_readahead+0x550/0x998
[ 495.387306] page_cache_sync_ra+0xec/0x148
[ 495.387330] filemap_get_pages+0x290/0x1130
[ 495.387357] filemap_read+0x298/0x8c0
[ 495.387381] generic_file_read_iter+0x84/0x278
[ 495.387407] ext4_file_read_iter+0x1bc/0x250
[ 495.387434] do_iter_readv_writev+0x214/0x320
[ 495.387462] do_iter_read+0x100/0x300
[ 495.387487] vfs_readv+0xc8/0x138
[ 495.387512] do_preadv+0xcc/0x1a0
[ 495.387537] __arm64_sys_preadv+0xa4/0xc8
[ 495.387562] invoke_syscall+0x78/0x218
[ 495.387589] el0_svc_common+0x118/0x1e8
[ 495.387612] do_el0_svc+0x9c/0xc8
[ 495.387636] el0_svc+0x50/0xd8
[ 495.387661] el0t_64_sync_handler+0x44/0xf0
[ 495.387686] el0t_64_sync+0x1a8/0x1b0
[ 495.387712]
other info that might help us debug this:
[ 495.387731] Chain exists of:
&pgdat->kswapd_wait --> &dma_entry_hash[i].lock --> radix_lock
[ 495.387804] Possible unsafe locking scenario:
[ 495.387821]        CPU0                    CPU1
[ 495.387839]        ----                    ----
[ 495.387856]   lock(radix_lock);
[ 495.387889]                                lock(&dma_entry_hash[i].lock);
[ 495.387923]                                lock(radix_lock);
[ 495.387956]   lock(&pgdat->kswapd_wait);
[ 495.387989]
*** DEADLOCK ***
[ 495.388007] 3 locks held by virtio_blk/6627:
[ 495.388029] #0: ffffff8114ecbae8 (mapping.invalidate_lock#4){++++}-{3:3}, at: page_cache_ra_unbounded+0xd8/0x4e8
[ 495.388115] #1: ffffff8080ed3910 (set->srcu){.?.?}-{0:0}, at: srcu_lock_acquire+0xc/0x38
[ 495.388191] #2: ffffffd7a9518bd8 (radix_lock){-.-.}-{2:2}, at: add_dma_entry+0x184/0x478
[ 495.388265]
stack backtrace:
[ 495.388287] CPU: 5 PID: 6627 Comm: virtio_blk Tainted: G W 6.6.21-lockdep-01303-g479a6ef70fae #1 67e8decbdd22d65a000d68bd55a8bb477fc19474
[ 495.388319] Hardware name: Google Lazor (rev3 - 8) with KB Backlight (DT)
[ 495.388340] Call trace:
[ 495.388360] dump_backtrace+0xf8/0x148
[ 495.388388] show_stack+0x20/0x48
[ 495.388413] dump_stack_lvl+0xb4/0xf8
[ 495.388438] dump_stack+0x18/0x40
[ 495.388462] print_circular_bug+0x17c/0x1a8
[ 495.388489] check_noncircular+0x294/0x388
[ 495.388513] __lock_acquire+0x2ae8/0x60b8
[ 495.388542] lock_acquire+0x204/0x6c0
[ 495.388569] _raw_spin_lock_irqsave+0x74/0xf8
[ 495.388599] __wake_up_common_lock+0xdc/0x190
[ 495.388628] __wake_up+0x20/0x38
[ 495.388654] wakeup_kswapd+0x274/0x548
[ 495.388684] get_page_from_freelist+0x2254/0x24f8
[ 495.388713] __alloc_pages+0x2bc/0x728
[ 495.388738] new_slab+0xc4/0x508
[ 495.388766] ___slab_alloc+0x7d8/0xcc8
[ 495.388792] kmem_cache_alloc+0x300/0x428
[ 495.388818] radix_tree_node_alloc+0x154/0x2c8
[ 495.388845] radix_tree_insert+0x124/0x3d8
[ 495.388869] add_dma_entry+0x19c/0x478
[ 495.388897] debug_dma_map_sg+0x610/0x780
[ 495.388923] __dma_map_sg_attrs+0x100/0x170
[ 495.388948] dma_map_sg_attrs+0x18/0x30
[ 495.388972] cqhci_request+0x94c/0x1268
[ 495.388999] mmc_cqe_start_req+0xbc/0x240
[ 495.389028] mmc_blk_mq_issue_rq+0x7cc/0xb18
[ 495.389055] mmc_mq_queue_rq+0x3a4/0x7c8
[ 495.389082] blk_mq_dispatch_rq_list+0x998/0x14e8
[ 495.389110] __blk_mq_sched_dispatch_requests+0xaa0/0xfe8
[ 495.389136] blk_mq_sched_dispatch_requests+0xa8/0xe8
[ 495.389161] blk_mq_run_hw_queue+0x470/0x528
[ 495.389187] blk_mq_flush_plug_list+0x608/0x12c8
[ 495.389213] __blk_flush_plug+0x2fc/0x370
[ 495.389242] blk_finish_plug+0x60/0x90
[ 495.389267] read_pages+0x3d8/0x530
[ 495.389293] page_cache_ra_unbounded+0x3c0/0x4e8
[ 495.389318] do_page_cache_ra+0xc8/0xe8
[ 495.389342] ondemand_readahead+0x550/0x998
[ 495.389366] page_cache_sync_ra+0xec/0x148
[ 495.389390] filemap_get_pages+0x290/0x1130
[ 495.389417] filemap_read+0x298/0x8c0
[ 495.389441] generic_file_read_iter+0x84/0x278
[ 495.389466] ext4_file_read_iter+0x1bc/0x250
[ 495.389493] do_iter_readv_writev+0x214/0x320
[ 495.389521] do_iter_read+0x100/0x300
[ 495.389546] vfs_readv+0xc8/0x138
[ 495.389571] do_preadv+0xcc/0x1a0
[ 495.389596] __arm64_sys_preadv+0xa4/0xc8
[ 495.389621] invoke_syscall+0x78/0x218
[ 495.389648] el0_svc_common+0x118/0x1e8
[ 495.389672] do_el0_svc+0x9c/0xc8
[ 495.389695] el0_svc+0x50/0xd8
[ 495.389721] el0t_64_sync_handler+0x44/0xf0
[ 495.389746] el0t_64_sync+0x1a8/0x1b0
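The #5 edge in this chain is the same pattern noted in #2: dma_sync_single_for_cpu() is reached from tmc_etr_sync_flat_buf() while tmc_update_etr_buffer() holds drvdata->spinlock. A hedged sketch of the restructuring suggested there, reusing the hypothetical stand-in struct from #2 (snapshot what the sync needs under the lock, then call the DMA API unlocked):

static void sketch_sync_unlocked(struct etr_sketch *s)
{
	unsigned long flags;
	struct device *dev;
	dma_addr_t daddr;
	size_t size;

	/* Take the driver lock only long enough to snapshot the
	 * buffer description... */
	spin_lock_irqsave(&s->spinlock, flags);
	dev = s->dev;
	daddr = s->daddr;
	size = s->size;
	spin_unlock_irqrestore(&s->spinlock, flags);

	/* ...then sync with no driver lock held, so the DMA-debug
	 * internal locks no longer nest inside drvdata->spinlock. */
	dma_sync_single_for_cpu(dev, daddr, size, DMA_FROM_DEVICE);
}

Whether that is actually safe depends on what else the spinlock protects (for example, the buffer being freed concurrently), which is presumably what makes the real fix less trivial than this sketch.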
Description
I got a lockdep splat after leaving trogdor.lazor idle (but logged in) for over an hour with the debug kernel built. Presumably that's when perf runs to collect ETM traces.