[ltt-dev] lttv unable to execute textDump on MIPS multicore target

Naresh Bhat nareshgbhat at gmail.com
Mon Feb 14 03:48:33 EST 2011


Hi Mathieu,

I have backported the patch; TSC is now enabled in my kernel and I am able
to boot it. However, I have observed the following difference between the
single-core and multi-core boot logs:

On single-core MIPS:
checking TSC synchronization across all online CPUs: PASSED

On multi-core MIPS:
Measured 553605851 cycles TSC offset between CPUs, turning off TSC clock.

I am trying to understand why the TSC clock is turned off. Is there a patch
I need to pick up to fix this issue?
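
As far as I understand it, that boot-time check samples the cycle counter on
each online CPU and, if the counters disagree by more than a small tolerance,
it stops using the per-CPU counter as a clock source, which is what the
"turning off TSC clock" message reports. Below is a minimal, self-contained
sketch of that decision only (not the kernel's actual sync code); read_cycles()
and the threshold are made up for illustration, and the 553605851 figure just
replays the offset from my boot log:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-CPU cycle-counter read, simulated with a fixed skew. */
static uint64_t read_cycles(int cpu)
{
	static uint64_t now = 1000000;

	now += 100;                            /* time passing between samples */
	return now + (cpu ? 553605851ULL : 0); /* CPU 1 carries a large skew   */
}

#define MAX_ALLOWED_OFFSET 100000ULL   /* arbitrary threshold for the sketch */

int main(void)
{
	uint64_t c0 = read_cycles(0);      /* counter as seen by CPU 0 */
	uint64_t c1 = read_cycles(1);      /* counter as seen by CPU 1 */
	uint64_t offset = c1 > c0 ? c1 - c0 : c0 - c1;

	if (offset > MAX_ALLOWED_OFFSET)
		printf("Measured %llu cycles TSC offset between CPUs, "
		       "turning off TSC clock.\n", (unsigned long long)offset);
	else
		printf("checking TSC synchronization across all online CPUs: PASSED\n");
	return 0;
}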

Error Logs:
........................
.........................................
..........................................................
Kernel command line:  bootoctlinux 0x21000000 numcores=16 root=/dev/nfs rw
nfsroot=<host IP>:/image-cavium-octeonplus
PID hash table entries: 2048 (order: 2, 16384 bytes)
Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
Primary instruction cache 32kB, virtually tagged, 4 way, 64 sets, linesize
128 bytes.
Primary data cache 16kB, 64-way, 2 sets, linesize 128 bytes.
Memory: 498944k/534976k available (4876k kernel code, 35200k reserved, 4937k
data, 272k init, 0k highmem)
Experimental preemptable hierarchical RCU implementation.
NR_IRQS:408
Calibrating delay loop (skipped) preset value.. 1600.00 BogoMIPS
(lpj=8000000)
Security Framework initialized
Mount-cache hash table entries: 256
Checking for the daddi bug... no.
SMP: Booting CPU01 (CoreId  1)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU02 (CoreId  2)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU03 (CoreId  3)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU04 (CoreId  4)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU05 (CoreId  5)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU06 (CoreId  6)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU07 (CoreId  7)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU08 (CoreId  8)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU09 (CoreId  9)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU10 (CoreId 10)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU11 (CoreId 11)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU12 (CoreId 12)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU13 (CoreId 13)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU14 (CoreId 14)...
CPU revision is: 000d0301 (Cavium Octeon+)
SMP: Booting CPU15 (CoreId 15)...
CPU revision is: 000d0301 (Cavium Octeon+)
Brought up 16 CPUs
checking TSC synchronization across all online CPUs:
Measured 553605851 cycles TSC offset between CPUs, turning off TSC clock.
NET: Registered protocol family 16
Not in host mode, PCI Controller not initialized
bio: create slab <bio-0> at 0
vgaarb: loaded
SCSI subsystem initialized
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Switching to clocksource OCTEON_CVMCOUNT
NET: Registered protocol family 2
IP route cache hash table entries: 4096 (order: 3, 32768 bytes)
TCP established hash table entries: 16384 (order: 6, 262144 bytes)
TCP bind hash table entries: 16384 (order: 6, 262144 bytes)
TCP: Hash tables configured (established 16384 bind 16384)
TCP reno registered
NET: Registered protocol family 1
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
/proc/octeon_perf: Octeon performace counter interface loaded
init_vdso successfull
HugeTLB registered 2 MB page size, pre-allocated 0 pages
JFFS2 version 2.2. (NAND) © 2001-2006 Red Hat, Inc.
msgmni has been set to 976
alg: No test for stdrng (krng)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
LTT : ltt-relay init
Serial: 8250/16550 driver, 2 ports, IRQ sharing disabled
serial8250.0: ttyS0 at MMIO 0x1180000000800 (irq = 58) is a OCTEON
console [ttyS0] enabled, bootconsole disabled
console [ttyS0] enabled, bootconsole disabled
serial8250.0: ttyS1 at MMIO 0x1180000000c00 (irq = 59) is a OCTEON
brd: module loaded
loop: module loaded
pata_octeon_cf pata_octeon_cf: version 2.1 8 bit.
...................................
.....................................
..............................................
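
For reference, my understanding of the Octeon trace-clock patch mentioned in
the thread below is that it reads the chip's native 64-bit cycle counter (the
same counter behind the OCTEON_CVMCOUNT clocksource shown above) instead of
the 32-bit CP0 count register, so per-CPU trace timestamps are wide enough not
to wrap during a trace. A rough sketch of that read path, illustrative only:
octeon_read_cycle64() is my own naming, and the rdhwr register number is an
assumption, not taken from the patch.

#include <stdint.h>

/* Hypothetical helper: single 64-bit read of the Octeon cycle counter
 * (hardware register 31 assumed here). */
static inline uint64_t octeon_read_cycle64(void)
{
	uint64_t cycles = 0;
#if defined(__mips__) && defined(__OCTEON__)
	__asm__ __volatile__("rdhwr %0, $31" : "=r"(cycles));
#endif
	return cycles;
}

/* Trace-clock-style read: one monotonic 64-bit timestamp per event. */
static inline uint64_t trace_clock_read64(void)
{
	return octeon_read_cycle64();
}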






On Fri, Feb 11, 2011 at 7:56 PM, Mathieu Desnoyers <
compudj at krystal.dyndns.org> wrote:

> * Naresh Bhat (nareshgbhat at gmail.com) wrote:
> > Hi Mathieu,
> >
> > I am using the 0.188 patchset version (2.6.32.4-lttng-0.188)
>
> Hi Naresh,
>
> In the more recent LTTng versions, we have a patch to support Octeon.
> It's a patch called
> "lttng-mips-use-64-bit-counter-for-trace-clock-on-octeon-cpus.patch" in
> the 0.241 lttng patchset for kernel 2.6.37. You might want to either
> upgrade your kernel/lttng or try to apply this patch to your older
> kernel if you have version requirements (it might work or not, and you
> may have to tweak a few details).
>
> Good luck!
>
> Mathieu
>
> > Thanks
> > -Naresh Bhat
> >
> > On Fri, Feb 11, 2011 at 12:37 PM, Mathieu Desnoyers <
> > compudj at krystal.dyndns.org> wrote:
> >
> > > * Naresh Bhat (nareshgbhat at gmail.com) wrote:
> > > > Hi All,
> > > >
> > > > I am trying to execute an LTTng test on a multicore MIPS (I pass
> > > > numcores=16 while booting) Octeon-based 58xx target board, but the
> > > > lttv textDump fails to dump the trace.
> > > >
> > > > My kernel version: 2.6.32
> > >
> > > Hello,
> > >
> > > Which version of the LTTng patchset are you using ?
> > >
> > > Mathieu
> > >
> > > > Linux Trace Toolkit Trace Control 0.78-04122009
> > > >
> > > > Linux Trace Toolkit Visualizer 0.12.29-02022010
> > > >
> > > > My questions are:
> > > >
> > > > 1. Has anybody faced this kind of problem?
> > > > 2. Is there any patch available for this? If so, can you provide me a
> > > > pointer to that patch?
> > > >
> > > > -- Thanks and Regards
> > > > Naresh Bhat
> > > >
> > > > Error log messages:
> > > >
> > > > root at cavium-octeonplus:~# mkdir /mnt/debugfs
> > > >
> > > > root at cavium-octeonplus:~# mount -t debugfs debugfs /mnt/debugfs/
> > > > root at cavium-octeonplus:~# ltt-armall
> > > > Connecting /mnt/debugfs/ltt/markers/pm/idle_entry
> > > > Connecting /mnt/debugfs/ltt/markers/pm/idle_exit
> > > > Connecting /mnt/debugfs/ltt/markers/pm/suspend_entry
> > > > Connecting /mnt/debugfs/ltt/markers/pm/suspend_exit
> > > > Connecting /mnt/debugfs/ltt/markers/ipc/msg_create
> > > > Connecting /mnt/debugfs/ltt/markers/ipc/sem_create
> > > > Connecting /mnt/debugfs/ltt/markers/ipc/shm_create
> > > > Connecting /mnt/debugfs/ltt/markers/ipc/call
> > > > Connecting /mnt/debugfs/ltt/markers/fs/buffer_wait_start
> > > > Connecting /mnt/debugfs/ltt/markers/fs/buffer_wait_end
> > > > Connecting /mnt/debugfs/ltt/markers/fs/exec
> > > > Connecting /mnt/debugfs/ltt/markers/fs/ioctl
> > > > Connecting /mnt/debugfs/ltt/markers/fs/open
> > > > Connecting /mnt/debugfs/ltt/markers/fs/close
> > > > Connecting /mnt/debugfs/ltt/markers/fs/lseek
> > > > Connecting /mnt/debugfs/ltt/markers/fs/llseek
> > > > Connecting /mnt/debugfs/ltt/markers/fs/read
> > > > Connecting /mnt/debugfs/ltt/markers/fs/write
> > > > Connecting /mnt/debugfs/ltt/markers/fs/pread64
> > > > Connecting /mnt/debugfs/ltt/markers/fs/pwrite64
> > > > Connecting /mnt/debugfs/ltt/markers/fs/readv
> > > > Connecting /mnt/debugfs/ltt/markers/fs/writev
> > > > Connecting /mnt/debugfs/ltt/markers/fs/select
> > > > Connecting /mnt/debugfs/ltt/markers/fs/pollfd
> > > > Connecting /mnt/debugfs/ltt/markers/mm/wait_on_page_start
> > > > Connecting /mnt/debugfs/ltt/markers/mm/wait_on_page_end
> > > > Connecting /mnt/debugfs/ltt/markers/mm/huge_page_free
> > > > Connecting /mnt/debugfs/ltt/markers/mm/huge_page_alloc
> > > > Connecting /mnt/debugfs/ltt/markers/mm/page_free
> > > > Connecting /mnt/debugfs/ltt/markers/mm/page_alloc
> > > > Connecting /mnt/debugfs/ltt/markers/mm/swap_in
> > > > Connecting /mnt/debugfs/ltt/markers/mm/swap_out
> > > > Connecting /mnt/debugfs/ltt/markers/mm/swap_file_close
> > > > Connecting /mnt/debugfs/ltt/markers/mm/swap_file_open
> > > > Connecting /mnt/debugfs/ltt/markers/mm/add_to_page_cache
> > > > Connecting /mnt/debugfs/ltt/markers/mm/remove_from_page_cache
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/trap_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/trap_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/syscall_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/syscall_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/irq_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/irq_next_handler
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/irq_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/softirq_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/softirq_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/softirq_raise
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/tasklet_low_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/tasklet_low_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/tasklet_high_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/tasklet_high_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/kthread_stop
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/kthread_stop_ret
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/sched_wait_task
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/sched_try_wakeup
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/sched_wakeup_new_task
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/sched_schedule
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/sched_migrate_task
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/send_signal
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/process_free
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/process_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/process_wait
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/process_fork
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/kthread_create
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/timer_itimer_expired
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/timer_itimer_set
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/timer_set
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/timer_update_time
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/timer_timeout
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/printk
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/vprintk
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/module_free
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/module_load
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/panic
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/kernel_kexec
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/crash_kexec
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/page_fault_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/page_fault_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/page_fault_nosem_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/page_fault_nosem_exit
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/page_fault_get_user_entry
> > > > Connecting /mnt/debugfs/ltt/markers/kernel/page_fault_get_user_exit
> > > > Connecting /mnt/debugfs/ltt/markers/net/dev_xmit
> > > > Connecting /mnt/debugfs/ltt/markers/net/dev_receive
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_create
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_bind
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_connect
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_listen
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_accept
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_getsockname
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_getpeername
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_socketpair
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_sendmsg
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_recvmsg
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_setsockopt
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_getsockopt
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_shutdown
> > > > Connecting /mnt/debugfs/ltt/markers/net/socket_call
> > > > Connecting /mnt/debugfs/ltt/markers/net/tcpv4_rcv
> > > > Connecting /mnt/debugfs/ltt/markers/net/udpv4_rcv
> > > > Connecting /mnt/debugfs/ltt/markers/net/napi_schedule
> > > > Connecting /mnt/debugfs/ltt/markers/net/napi_poll
> > > > Connecting /mnt/debugfs/ltt/markers/net/napi_complete
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_abort_pc
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_abort_fs
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_insert_pc
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_insert_fs
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_issue_pc
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_issue_fs
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_requeue_pc
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_requeue_fs
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_complete_pc
> > > > Connecting /mnt/debugfs/ltt/markers/block/rq_complete_fs
> > > > Connecting /mnt/debugfs/ltt/markers/block/bio_bounce
> > > > Connecting /mnt/debugfs/ltt/markers/block/bio_complete
> > > > Connecting /mnt/debugfs/ltt/markers/block/bio_backmerge
> > > > Connecting /mnt/debugfs/ltt/markers/block/bio_frontmerge
> > > > Connecting /mnt/debugfs/ltt/markers/block/bio_queue
> > > > Connecting /mnt/debugfs/ltt/markers/block/getrq_bio
> > > > Connecting /mnt/debugfs/ltt/markers/block/getrq
> > > > Connecting /mnt/debugfs/ltt/markers/block/sleeprq_bio
> > > > Connecting /mnt/debugfs/ltt/markers/block/sleeprq
> > > > Connecting /mnt/debugfs/ltt/markers/block/plug
> > > > Connecting /mnt/debugfs/ltt/markers/block/unplug_io
> > > > Connecting /mnt/debugfs/ltt/markers/block/unplug_timer
> > > > Connecting /mnt/debugfs/ltt/markers/block/split
> > > > Connecting /mnt/debugfs/ltt/markers/block/remap
> > > > Connecting /mnt/debugfs/ltt/markers/userspace/event
> > > > Connecting /mnt/debugfs/ltt/markers/netif_state/insert_ifa_ipv4
> > > > Connecting /mnt/debugfs/ltt/markers/netif_state/del_ifa_ipv4
> > > > Connecting /mnt/debugfs/ltt/markers/netif_state/insert_ifa_ipv6
> > > > Connecting /mnt/debugfs/ltt/markers/netif_state/network_ipv4_interface
> > > > Connecting /mnt/debugfs/ltt/markers/netif_state/network_ip_interface
> > > > Connecting /mnt/debugfs/ltt/markers/irq_state/interrupt
> > > > Connecting /mnt/debugfs/ltt/markers/vm_state/vm_map
> > > > Connecting /mnt/debugfs/ltt/markers/fd_state/file_descriptor
> > > > Connecting /mnt/debugfs/ltt/markers/task_state/process_state
> > > > Connecting /mnt/debugfs/ltt/markers/global_state/statedump_end
> > > > Connecting /mnt/debugfs/ltt/markers/input/input_event
> > > > Connecting /mnt/debugfs/ltt/markers/swap_state/statedump_swap_files
> > > > Connecting /mnt/debugfs/ltt/markers/module_state/list_module
> > > > Connecting /mnt/debugfs/ltt/markers/softirq_state/softirq_vec
> > > > root at cavium-octeonplus:~# lttctl -C -w /tmp/trace1 trace1
> > > > Linux Trace Toolkit Trace Control 0.78-04122009
> > > >
> > > > Controlling trace : trace1
> > > >
> > > > lttctl: Creating trace
> > > > lttctl: Forking lttd
> > > > Linux Trace Toolkit Trace Daemon 0.78-04122009
> > > >
> > > > Reading from debugfs directory : /mnt/debugfs/ltt/trace1
> > > > Writing to trace directory : /tmp/trace1
> > > >
> > > > lttctl: Starting trace
> > > > root at cavium-octeonplus:~# lttctl -D trace1
> > > > Linux Trace Toolkit Trace Control 0.78-04122009
> > > >
> > > > Controlling trace : trace1
> > > >
> > > > lttctl: Pausing trace
> > > > lttctl: Destroying trace
> > > > LTT: 32 events written in channel softirq_state (cpu 0, index 0)
> > > > LTT: 17 events written in channel module_state (cpu 0, index 0)
> > > > LTT: 1 events written in channel global_state (cpu 0, index 0)
> > > > LTT: 335 events written in channel task_state (cpu 0, index 0)
> > > > LTT: 699 events written in channel fd_state (cpu 0, index 0)
> > > > LTT: 429 events written in channel vm_state (cpu 0, index 0)
> > > > LTT: 7 events written in channel irq_state (cpu 0, index 0)
> > > > LTT: 10 events written in channel netif_state (cpu 0, index 0)
> > > > LTT: 218 events written in channel net (cpu 0, index 0)
> > > > LTT: 3 events written in channel net (cpu 1, index 0)
> > > > LTT: 4 events written in channel net (cpu 2, index 0)
> > > > LTT: 2881 events written in channel kernel (cpu 0, index 0)
> > > > LTT: 663 events written in channel kernel (cpu 1, index 0)
> > > > LTT: 731 events written in channel kernel (cpu 2, index 0)
> > > > LTT: 88 events written in channel kernel (cpu 3, index 0)
> > > > LTT: 77 events written in channel kernel (cpu 4, index 0)
> > > > LTT: 89 events written in channel kernel (cpu 5, index 0)
> > > > LTT: 98 events written in channel kernel (cpu 6, index 0)
> > > > LTT: 78 events written in channel kernel (cpu 7, index 0)
> > > > LTT: 208 events written in channel kernel (cpu 8, index 0)
> > > > LTT: 75 events written in channel kernel (cpu 9, index 0)
> > > > LTT: 75 events written in channel kernel (cpu 10, index 0)
> > > > LTT: 88 events written in channel kernel (cpu 11, index 0)
> > > > LTT: 93 events written in channel kernel (cpu 12, index 0)
> > > > LTT: 95 events written in channel kernel (cpu 13, index 0)
> > > > LTT: 62 events written in channel kernel (cpu 14, index 0)
> > > > LTT: 64 events written in channel kernel (cpu 15, index 0)
> > > > LTT: 44 events written in channel mm (cpu 0, index 0)
> > > > LTT: 14 events written in channel mm (cpu 1, index 0)
> > > > LTT: 41 events written in channel mm (cpu 2, index 0)
> > > > LTT: 1 events written in channel mm (cpu 8, index 0)
> > > > LTT: 253 events written in channel fs (cpu 0, index 0)
> > > > LTT: 4 events written in channel fs (cpu 1, index 0)
> > > > LTT: 31 events written in channel fs (cpu 2, index 0)
> > > > LTT: 272 events written in channel fs (cpu 4, index 0)
> > > > LTT: 8 events written in channel fs (cpu 8, index 0)
> > > > LTT: 286 events written in channel metadata (cpu 0, index 0)
> > > > root at cavium-octeonplus:~# lttv -m textDump -t /tmp/trace1/ >
> > > > Log-trace-text-format.txt
> > > >
> > > > ** ERROR **: Process 2120 has been created at [0.000000000] and
> > > > inserted at [92.895699903] before fork on cpu 0 [94.290545440].
> > > > Probably an unsynchronized TSC problem on the traced machine.
> > > > aborting...
> > > > Aborted
> > > > root at cavium-octeonplus:~#
> > >
> > > --
> > > Mathieu Desnoyers
> > > Operating System Efficiency R&D Consultant
> > > EfficiOS Inc.
> > > http://www.efficios.com
> > >
>
> --
> Mathieu Desnoyers
> Operating System Efficiency R&D Consultant
> EfficiOS Inc.
> http://www.efficios.com
>