The recent switch of the Sibyte SOCs from the processor-specific cache
management code in c-sb1.c to c-r4k.c lost this old hack
[MIPS] Hack for SB1 cache issues
Removing flush_icache_page a while ago broke SB1 which was using an empty
flush_data_cache_page function. This glues things well enough so a more
efficient but also more intrusive solution can be found later.
Signed-Off-By: Thiemo Seufer <ths@networkno.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
in the hope it was no longer needed. As it turns out it still is, so
resurrect it until there is a better solution.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When GDB writes a breakpoint into the address space of an inferior
process, the kernel needs to invalidate the modified memory in the
inferior. This is done by calling flush_cache_page, which in turn calls
r4k_flush_cache_page and local_r4k_flush_cache_page for VSMP or SMTC
kernels via r4k_on_each_cpu().
As the VSMP and SMTC SMP kernels for the 34K run on a single core with
shared caches, it is possible to get away without interprocessor
function calls. This optimization is implemented in r4k_on_each_cpu(),
so local_r4k_flush_cache_page is only ever called on the local CPU.
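That shortcut can be pictured with the following sketch of
r4k_on_each_cpu() (an illustration of the mechanism, not necessarily the
exact source):

	/*
	 * Sketch: on CONFIG_MIPS_MT_SMP/SMTC kernels all TCs share the
	 * same caches, so running the flush routine locally is enough
	 * and the smp_call_function() IPI is compiled out.
	 */
	static inline void r4k_on_each_cpu(void (*func)(void *info), void *info,
					   int retry, int wait)
	{
		preempt_disable();
	#if !defined(CONFIG_MIPS_MT_SMP) && !defined(CONFIG_MIPS_MT_SMTC)
		smp_call_function(func, info, retry, wait);
	#endif
		func(info);
		preempt_enable();
	}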
This is where the following code in local_r4k_flush_cache_page() strikes:
	/*
	 * If ownes no valid ASID yet, cannot possibly have gotten
	 * this page into the cache.
	 */
	if (cpu_context(smp_processor_id(), mm) == 0)
		return;
On VSMP and SMTC kernels cpu_context() has a separate value for each
CPU (TC). So if the CPU executing local_r4k_flush_cache_page has not
accessed the mm but one of the other CPUs has, there may still be data
to be flushed in the cache, yet local_r4k_flush_cache_page will falsely
return early, leaving the I-cache inconsistent for the breakpoint.
While the issue was discovered with GDB, it also exists in
local_r4k_flush_cache_range() and local_r4k_flush_cache_mm().
Fixed by introducing a new function has_valid_asid() which on MT kernels
returns true if an mm is active on any processor in the system.
This is relatively expensive since cache misses have to be assumed for
the memory accesses in that loop, but it seems the most viable solution
for 2.6.23 and older -stable kernels.
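A sketch of such a helper, close to but not necessarily identical with
the actual patch:

	/*
	 * On MT kernels (VSMP/SMTC) the caches are shared by all TCs, so
	 * the mm may be live in the cache even if the local CPU never
	 * ran it; check every online CPU instead of only the current one.
	 */
	static inline int has_valid_asid(const struct mm_struct *mm)
	{
	#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_MIPS_MT_SMTC)
		int i;

		for_each_online_cpu(i)
			if (cpu_context(i, mm))
				return 1;

		return 0;
	#else
		return cpu_context(smp_processor_id(), mm);
	#endif
	}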
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This should help make bug reports for the gadzillion of cores with all
their configuration and synthesis options more useful.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On MP configurations it's highly dubious what this code will actually
affect since blasting away cachelines may or may not do the right
thing wrt. cache coherency.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It may not be perfect yet but the SB1 code is badly broken and has
horrible performance issues.
Downside: this seriously breaks support for pass 1 parts of the BCM1250,
where indexed cacheops don't work quite reliably, but I seem to be the
last one on the planet with a pass 1 part anyway.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
r4k_flush_cache_all() and r4k_flush_cache_mm() case: these are no-ops
if the CPU does not have dc_aliases, which means we do not need to care
about the icache here.
r4k_flush_cache_range() case: if r4k_flush_cache_mm() did not need to
care about the icache, the same holds for this function.
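In code that amounts to an early return, roughly (a sketch, not the
exact diff):

	static void r4k_flush_cache_mm(struct mm_struct *mm)
	{
		/*
		 * Sketch: without D-cache aliases there is nothing to do
		 * here, so the icache needs no handling either.
		 */
		if (!cpu_has_dc_aliases)
			return;

		r4k_on_each_cpu(local_r4k_flush_cache_mm, mm, 1, 1);
	}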
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
With more recent compilers inline doesn't necessarily mean a function
will always be inlined. So leave that decision to the compiler and mark
the function __init.
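A purely hypothetical example of the change (function name chosen only
for illustration):

	/*
	 * Hypothetical example: let the compiler decide about inlining
	 * and discard the code after boot instead of forcing "inline".
	 */
	static void __init probe_pcache(void)	/* was: static inline void probe_pcache(void) */
	{
		/* ... one-time cache probing at boot ... */
	}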
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
c-r4k.c and c-sb1.c use drop_mmu_context() to flush virtually tagged
I-caches, but this does not work for flushing another task's icache.
This is for example triggered by copy_to_user_page() called from
ptrace(2). Use an indexed flush for such cases.
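Inside local_r4k_flush_cache_page() the distinction looks roughly like
this (simplified sketch, not the literal diff):

	/*
	 * Sketch: dropping the MMU context only invalidates the virtually
	 * tagged I-cache for the mm currently active on this CPU.  For
	 * another task's page (e.g. copy_to_user_page() via ptrace) fall
	 * back to an indexed flush of the page.
	 */
	if (cpu_has_vtag_icache && mm == current->active_mm) {
		int cpu = smp_processor_id();

		if (cpu_context(cpu, mm) != 0)
			drop_mmu_context(mm, cpu);
	} else
		r4k_blast_icache_page_indexed(addr);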
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Save the Config.OD bit from being clobbered by coherency_setup(). This
bit, when set, fixes various errata in the early steppings of the Au1x00
SOCs. Unfortunately, the bit was write-only on the earliest of them.
In addition, also restore the bit after a wakeup from sleep.
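The idea, as a sketch (the helper au1x00_od_needed() is a stand-in for
the real PrID revision check; bit 19 is the Au1x00 Config.OD position):

	/*
	 * Sketch: coherency_setup() rewrites the CM field of c0_config and
	 * thereby clobbers Config.OD.  On the earliest Au1x00 steppings the
	 * bit reads back as 0, so it cannot be saved and restored; just set
	 * it again afterwards (and again after wakeup from sleep).
	 */
	static void au1x00_fixup_config_od(void)
	{
		if (au1x00_od_needed(read_c0_prid()))	/* stand-in revision check */
			set_c0_config(1 << 19);		/* Config.OD */
	}

	void coherency_setup(void)
	{
		change_c0_config(CONF_CM_CMASK, CONF_CM_DEFAULT);
		/* ... other per-CPU fixups ... */
		au1x00_fixup_config_od();
	}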
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
A proper fix would involve introducing the notion of shared caches, but
at this stage of 2.6.17 that's going to be too intrusive and is not
needed for current hardware; besides, I think some discussion will be
needed.
So for now on the affected SMP configurations which happen to suffer from
cache aliases we make use of the fact that a single cache will be shared
by all processors. This solves the deadlock issue and will improve
performance by getting rid of the smp_call_function overhead.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Fix the cache index value in tx49_blast_icache32_page_indexed().
This damage was done by commit de62893bc0.
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The TX49XX has the prefetch instruction, but it supports only Pref_Load
(hint 0). The changes in this patch other than the Kconfig one do not
actually have any effect; I added them to prevent misuse of unsupported
hints.
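For illustration (not the patch itself), a prefetch limited to the only
supported hint would look like:

	/* Pref_Load (hint 0) comes from <asm/cacheops.h>. */
	static inline void prefetch(const void *addr)
	{
		__asm__ __volatile__(
		"	.set	mips4		\n"
		"	pref	%0, (%1)	\n"
		"	.set	mips0		\n"
		:
		: "i" (Pref_Load), "r" (addr));
	}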
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
If dcache_size != icache_size, or dcache_size != scache_size, or the
cache is set-associative, the icache/scache is not flushed properly.
Make blast_?cache_page_indexed() mask its index value correctly. Also,
use the physical address for the physically indexed pcache/scache.
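A condensed sketch of what a corrected page_indexed flush has to do:
mask with the way size and walk every way explicitly (simplified from
the macro-generated code):

	static inline void blast_icache_page_indexed(unsigned long page)
	{
		/* Mask with the way size, not the total cache size. */
		unsigned long indexmask = current_cpu_data.icache.waysize - 1;
		unsigned long start = INDEX_BASE + (page & indexmask);
		unsigned long end = start + PAGE_SIZE;
		unsigned long lsize = cpu_icache_line_size();
		unsigned long ws_inc = 1UL << current_cpu_data.icache.waybit;
		unsigned long ws_end = current_cpu_data.icache.ways <<
				       current_cpu_data.icache.waybit;
		unsigned long ws, addr;

		/* Hit every way of every set covered by the page. */
		for (ws = 0; ws < ws_end; ws += ws_inc)
			for (addr = start; addr < end; addr += lsize)
				cache_op(Index_Invalidate_I, addr | ws);
	}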
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When a CPU has no scache, the scache flushing functions currently
aren't getting initialized and the NULL pointer is eventually called
as a function. Initialize the scache flushing functions as a noop
when there's no scache.
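The fix amounts to something like the following sketch in the setup
code (simplified; only the r4k_blast_scache pointer shown):

	/* Safe stand-in for all S-cache hooks on S-cache-less CPUs. */
	static void cache_noop(void) {}

	static void __init r4k_blast_scache_setup(void)
	{
		unsigned long sc_lsize = cpu_scache_line_size();

		if (scache_size == 0)
			r4k_blast_scache = (void *)cache_noop;
		else if (sc_lsize == 16)
			r4k_blast_scache = blast_scache16;
		else if (sc_lsize == 32)
			r4k_blast_scache = blast_scache32;
		else if (sc_lsize == 64)
			r4k_blast_scache = blast_scache64;
		else if (sc_lsize == 128)
			r4k_blast_scache = blast_scache128;
	}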
Initial patch by me and most of the debugging done by Martin Michlmayr.
Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add blast_xxx_range(), protected_blast_xxx_range() etc. for common
use. They are built by __BUILD_BLAST_CACHE_RANGE().
Use the protected_cache_op() macro for the various protected_ routines.
The generated code should be logically the same.
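Roughly, the macro expands to a per-cache-line loop like this (condensed
sketch, not the verbatim header):

	#define __BUILD_BLAST_CACHE_RANGE(pfx, desc, hitop, prot)		\
	static inline void prot##blast_##pfx##cache##_range(unsigned long start,\
							    unsigned long end)	\
	{									\
		unsigned long lsize = cpu_##desc##_line_size();			\
		unsigned long addr = start & ~(lsize - 1);			\
		unsigned long aend = (end - 1) & ~(lsize - 1);			\
										\
		while (1) {							\
			/* hit-type op; "protected_" variants must not fault */	\
			prot##cache_op(hitop, addr);				\
			if (addr == aend)					\
				break;						\
			addr += lsize;						\
		}								\
	}

	/* e.g.: */
	__BUILD_BLAST_CACHE_RANGE(d, dcache, Hit_Writeback_Inv_D, protected_)
	__BUILD_BLAST_CACHE_RANGE(i, icache, Hit_Invalidate_I, )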
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This will also keep smp_call_function from doing stupid things when
called from a CPU that is not yet marked online.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>