This unbreaks kexec support. Without this fix all kexec attempts
fail, since __pa() does not behave like PHYSADDR(). The downside is
that we also kill
the code blocking users running old kexec-tools.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When CONFIG_FUNCTION_GRAPH_TRACER is enabled the function graph tracer
may patch return addresses on the stack with the address of
return_to_handler(). This really confuses the DWARF unwinder because it
will try to find the caller of return_to_handler(), not the caller at the
real return address.
So teach the DWARF unwinder how to find the real return address whenever
it encounters return_to_handler().
This patch does not cope very well when multiple return addresses on the
stack have been patched. Making that work properly would require state to
track how many instances of return_to_handler() have been seen, so that
we'd know where to look in current->curr_ret_stack[]. So for now, instead
of trying to handle this, just moan if more than one return address on
the stack has been patched.
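Roughly, the fixup boils down to something like the sketch below (not the
exact in-tree code; it assumes the ftrace_ret_stack layout of this era,
where current->ret_stack[] holds the saved return addresses and
current->curr_ret_stack indexes the top entry):

  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
  /* the ftrace trampoline, implemented in assembly */
  extern void return_to_handler(void);

  /*
   * If the unwinder reads a return address that the function graph
   * tracer has patched, substitute the real caller that the tracer
   * stashed away.
   */
  static unsigned long dwarf_fixup_ra(unsigned long ra)
  {
          if (ra == (unsigned long)return_to_handler &&
              current->ret_stack && current->curr_ret_stack >= 0)
                  return current->ret_stack[current->curr_ret_stack].ret;

          return ra;
  }
  #endif /* CONFIG_FUNCTION_GRAPH_TRACER */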
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds an __irq_entry annotation for do_IRQ() so that the IRQ
annotation in the function graph tracer works as advertised. We already
have the IRQENTRY section wired up, so this is just a trivial addition
to actually make use of it.
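For reference, the annotation simply tags the handler so that it lands in
the .irqentry.text section the graph tracer checks; roughly:

  #include <linux/ftrace.h>       /* for __irq_entry */

  asmlinkage __irq_entry int do_IRQ(unsigned int irq, struct pt_regs *regs)
  {
          /* ... existing dispatch unchanged ... */
  }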
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This moves the current dma_alloc/free_coherent() calls to a generic
variant and plugs them in for the nommu default. Other variants can
override the defaults in the dma mapping ops directly.
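A rough sketch of the shape this takes (names here are meant to be
illustrative of the approach rather than a verbatim copy of the code):

  /* generic helpers, usable by any platform without special needs */
  void *dma_generic_alloc_coherent(struct device *dev, size_t size,
                                   dma_addr_t *dma_handle, gfp_t gfp);
  void dma_generic_free_coherent(struct device *dev, size_t size,
                                 void *vaddr, dma_addr_t dma_handle);

  /* the nommu default just points at the generic helpers */
  struct dma_map_ops nommu_dma_ops = {
          .alloc_coherent = dma_generic_alloc_coherent,
          .free_coherent  = dma_generic_free_coherent,
          /* ... map_page/map_sg and friends as before ... */
  };

Platforms with their own coherency requirements can simply install a
different alloc_coherent/free_coherent pair in their dma_map_ops.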
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
SCIF2 and the FPU exceptions happen to share vector numbers, one in
EXPEVT and the other in INTEVT. This is a violation of the interface and
should never have made it into silicon. On top of that, the demux hack
that was added for special dispatch is rather error-prone, and introduces
more problems than it solves. Kill all of it off, and just refuse to deal
with SCIF2 outright.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This simplifies the irqflags support by switching over to the asm-generic
version. The necessary support functions are brought out-of-line for both
SHcompact and SHmedia instruction sets.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This code was added for some ancient SH-4 solution engines with peculiar
boot ROMs that did silly things to the UBC MSTP bits. None of these have
been in the wild for years, and these days the clock framework wraps up
the MSTP bits, meaning that the UBC code is one of the few interfaces
that is stomping MSTP bits underneath the clock framework. At this point
the risks far outweigh any benefit this code provided, so just kill it
off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables SCHED_MC support for SH-X3 multi-cores. Presently this is
just a simple wrapper around the possible map, but this allows for
tying in support for some of the more exotic NUMA clusters where we can
actually do something with the topology.
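The wrapper itself is trivial; a minimal sketch matching the description
above (the in-tree version may differ in detail):

  /* For now, every possible CPU is treated as part of one core group. */
  const struct cpumask *cpu_coregroup_mask(unsigned int cpu)
  {
          return cpu_possible_mask;
  }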
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In the case where need_resched() is set in between the cpu_idle() and
pm_idle() calls we were missing an else case for just re-enabling local
IRQs and bailing out. This was noticed by the irqs_disabled() warning,
even though IRQs were being re-enabled elsewhere.
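In other words, the tail of the idle loop needs handling along these
lines (simplified sketch, not the literal diff):

          local_irq_disable();
          if (!need_resched())
                  pm_idle();              /* returns with IRQs enabled */
          else
                  local_irq_enable();     /* the previously missing case */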
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This follows the x86 change and moves check_pgt_cache() up under the
!need_resched() tight loop, rather than simply calling into it when
exiting idle.
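I.e. the inner idle loop now looks roughly like the x86 one:

          while (!need_resched()) {
                  check_pgt_cache();
                  rmb();
                  idle();
          }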
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of chainsawing of the idle loop code to get light sleep
working on SMP. Previously this was forcing secondary CPUs into sleep
mode, and they would not come back if they didn't have their own local
timers. Given that we use clockevents broadcasting by default, the CPU
managing the clockevents can't have IRQs disabled before entering its
sleep state.
This unfortunately leaves us with the age-old need_resched() race in
between local_irq_enable() and cpu_sleep(), but at present this is
unavoidable. After some more experimentation it may be possible to layer
on SR.BL bit manipulation over top of this scheme to inhibit the race
condition, but given the current potential for missing wakeups, this is
left as a future exercise.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All of the secondary CPUs are forced into light sleep mode, but we were
missing the same initialization for the boot CPU. This resulted in
inconsistent sleep modes depending on which CPU we were on, confusing the
idle loop when not polling.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in support for NMI counting per-CPU via irq_cpustat_t.
Modelled after the x86 implementation.
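The bookkeeping side is just an extra per-CPU counter in irq_cpustat_t,
along the lines of the classic x86 layout (sketch):

  typedef struct {
          unsigned int __softirq_pending;
          unsigned int __nmi_count;       /* arch dependent */
  } ____cacheline_aligned irq_cpustat_t;

  #include <linux/irq_cpustat.h>  /* standard irq_stat() mappings */

The NMI handler then only has to bump __nmi_count for the local CPU, and
the count can be reported alongside the other interrupt statistics.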
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Replace TIF_RESTORE_SIGMASK with TS_RESTORE_SIGMASK and define our own
set_restore_sigmask() function. This saves the costly SMP-safe set_bit
operation, which we do not need for the sigmask flag since TIF_SIGPENDING
always has to be set too.
Based on the x86 and powerpc change.
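The x86-style helper looks roughly like this (flag value illustrative);
the point is that thread_info->status is only ever touched by the task
itself, so a plain OR is enough, while TIF_SIGPENDING keeps the real
set_bit():

  #define TS_RESTORE_SIGMASK      0x0001  /* restore signal mask in do_signal() */

  #define HAVE_SET_RESTORE_SIGMASK        1
  static inline void set_restore_sigmask(void)
  {
          struct thread_info *ti = current_thread_info();

          ti->status |= TS_RESTORE_SIGMASK;
          set_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags);
  }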
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The resume_userspace path had TRACE_IRQS_OFF written incorrectly and so
never handled the transition properly. This was fixed once before, but
seems to have made it back into the tree. Fix it for good.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The return code only needs to be flushed via the legacy path; otherwise
the flush is just a useless invalidation. This makes the behaviour
consistent across all of the trampoline setup paths.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The secondary CPU info was seeing corrupted results due to not entering
all of the setup paths taken by the boot CPU. So we just memcpy() the
boot cpu data over directly, and then fix up the per-CPU bits.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We do not want to use smp_processor_id() from these paths, as they trip
preempt BUGs. Switch the test over to the boot cpu directly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Secondary CPUs already take care of the D-cache bits through the common
cache initialization path, and the only thing that is necessary after
twiddling around with stack_start is ensuring that the I-cache changes
are visible (particularly since this tends to be the only part lacking
coherency).
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This cribs the x86 implementation of ftrace_nmi_enter() and friends to
make ftrace_modify_code() NMI-safe, particularly on SMP configurations.
For additional notes on the problems involved, see the comment below
ftrace_call_replace().
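The core of the x86 scheme, much simplified (the in-tree code has
additional ordering and error handling): the CPU patching the text
publishes what it wants written, and any NMI that lands while the update
is pending performs the write itself, so NMI context can never run
across a half-modified instruction.

  static atomic_t nmi_running = ATOMIC_INIT(0);
  static void *mod_code_ip;               /* address being patched */
  static const void *mod_code_newcode;    /* replacement instruction(s) */
  static int mod_code_write;              /* is an update pending? */
  static int mod_code_status;             /* result of the write */

  static void ftrace_mod_code(void)
  {
          mod_code_status = probe_kernel_write(mod_code_ip, mod_code_newcode,
                                               MCOUNT_INSN_SIZE);
  }

  void ftrace_nmi_enter(void)
  {
          atomic_inc(&nmi_running);
          smp_mb();
          if (mod_code_write)
                  ftrace_mod_code();
  }

  void ftrace_nmi_exit(void)
  {
          smp_mb();
          atomic_dec(&nmi_running);
  }

  static int do_ftrace_mod_code(unsigned long ip, const void *new_code)
  {
          mod_code_ip = (void *)ip;
          mod_code_newcode = new_code;
          mod_code_write = 1;
          smp_mb();

          ftrace_mod_code();              /* do the write on this CPU, too */

          mod_code_write = 0;
          smp_mb();
          while (atomic_read(&nmi_running))
                  cpu_relax();            /* let in-flight NMIs drain */

          return mod_code_status;
  }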
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds return_address.c to the -pg exclusion list; as this is the
building block for CALLER_ADDRx, we do not want to profile it.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This enables us to build the DWARF unwinder with modules both enabled
and disabled, in addition to reducing code size in the latter case. The
helpers are also consolidated, and modified to resemble the BUG module
helpers.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the unwinder implementation and adds a new
return_address() abstraction modelled after the ARM code. The DWARF
unwinder is tied into this, returning NULL when it is unable to support
an arbitrary depth.
This enables us to get correct behaviour with the unwinder enabled,
as well as disabling the arbitrary depth support when frame pointers are
enabled, as arbitrary depths with __builtin_return_address() are not
supported regardless.
With this abstraction it's also possible to layer on a simplified
implementation with frame pointers in the event that the unwinder isn't
enabled, although this is left as a future exercise.
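The CALLER_ADDRx wiring then ends up looking roughly like the ARM
version (sketch):

  /* asm/ftrace.h */
  extern void *return_address(unsigned int depth);

  #define HAVE_ARCH_CALLER_ADDR

  #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
  #define CALLER_ADDR1 ((unsigned long)return_address(1))
  #define CALLER_ADDR2 ((unsigned long)return_address(2))
  /* ... and so on for the deeper levels ... */

with return_address() walking the DWARF frames and handing back NULL
whenever it can't get to the requested depth.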
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Sync up with latest core changes in the syscalls tracing area:
- tracing: Map syscall name to number (syscall_name_to_nr())
- tracing: Call arch_init_ftrace_syscalls at boot
- tracing: add support for tracepoint ids (set_syscall_{enter,exit}_id())
Taken from the s390 change.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This follows the ARM change, as SH had all of the same issues:
Make die() better match x86:
- add printing of the last accessed sysfs file
- ensure console_verbose() is called under the lock
- ensure we panic outside of oops_exit()
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Originally, dwarf_unwind_stack() was a recursive function and it seems
that some of the old comments were never updated.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
If we broke out of the while (1) loop because the return address of
"frame" was zero, then "frame" needs to be free'd before we return.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Pass a module's .eh_frame section to the DWARF unwinder at module load
time so that the section's FDEs and CIEs can be registered with the
DWARF unwinder. This allows us to unwind the stack through module code
when generating backtraces.
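Conceptually the load-time hook just scans the module's section headers
for .eh_frame and hands the section to the unwinder; a sketch (helper
names here are illustrative rather than the exact in-tree API):

  int module_dwarf_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
                            struct module *me)
  {
          unsigned int i;
          const char *secstrings = (void *)hdr +
                                   sechdrs[hdr->e_shstrndx].sh_offset;

          for (i = 1; i < hdr->e_shnum; i++) {
                  /* .eh_frame carries the module's FDEs and CIEs */
                  if (!strcmp(secstrings + sechdrs[i].sh_name, ".eh_frame"))
                          return dwarf_register_section(sechdrs[i].sh_addr,
                                                        sechdrs[i].sh_size,
                                                        me);
          }

          return 0;
  }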
Signed-off-by: Matt Fleming <matt@console-pimps.org>
In the multi-evt conversion for the SH-X3 proto CPU, IRLs were dropped
down to a single unique masking source, which ended up blowing up on
ILSEL-based IRQs, which have special semantics that otherwise confuse the
intc code. While this does result in intc spewing about not having a
unique masking source, we don't really care.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.
Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
and 32-bit mode dynamically at runtime.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.
Likewise, replace a P1SEGADDR() use with virt_to_phys().
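For reference, the two differ roughly as follows: PHYSADDR() only strips
the segment bits, while __pa() follows the kernel's linear mapping, which
stays correct when P1/P2 pages are remapped through the PMB.

  /* approximately, from the SH headers */
  #define PHYSADDR(a)     (((unsigned long)(a)) & 0x1fffffff)
  #define __pa(x)         ((unsigned long)(x) - PAGE_OFFSET)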
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Certain networking and USB workloads generate floods of these accesses,
so just disable it by default (thereby restoring the old behaviour). The
option remains configurable from userspace, and can still be used as a
debugging aid.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There's already code in do_page_fault() to conditionally enable
interrupts, so we don't need to unconditionally enable them before
calling it. This fixes a lockdep warning where we called
trace_hardirqs_off() while IRQs were still enabled.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This bumps up the default I/O base to P2SEG, which allows legacy probing
to bail out gracefully rather than oopsing. Platforms that have a real
PIO offset still need to fix this up on their own, although most
platforms are content with P2SEG already.
The previous change teaching ioport_map() about >= P1SEG offsets, in
combination with this patch, allows both the already-remapped and the
legacy address probing to pass through and succeed.
Fixes up an oops with i8042 on the sh7785lcr board.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the case where certain drivers already do their own
remapping and subsequently attempt to use the PIO calls for I/O. In this
case there is no additional remapping that needs to be done, and the
address can be cast into the cookie directly.
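Taken together with the previous patch, the generic mapping logic
behaves roughly like the sketch below (not the literal machvec code):

  void __iomem *generic_ioport_map(unsigned long port, unsigned int nr)
  {
          /*
           * Already-remapped (or fixed P1/P2) addresses are handed
           * straight back as the cookie; only true legacy PIO offsets
           * get the I/O base applied.
           */
          if (port >= P1SEG)
                  return (void __iomem *)port;

          return (void __iomem *)(port + generic_io_base);
  }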
Signed-off-by: Paul Mundt <lethal@linux-sh.org>