History log of /linux-master/arch/mips/kernel/pm-cps.c
Revision Date Author Comments
# 6d74e0fc 09-Feb-2024 Jiaxun Yang <jiaxun.yang@flygoat.com>

MIPS: pm-cps: Use GPR number macros

Use GPR number macros in uasm code generation parts to
reduce code duplication.

There is a functional change due to differences in register
symbolic names between OABI and NABI, as the existing code
only used definitions from OABI.

Code pieces were carefully inspected to ensure register
usage is safe on NABI as well.

We changed the register allocation of r_pcohctl from T7 to T8,
as T7 is not available on NABI and we just want a caller-saved
scratch register here.
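
A representative conversion in this style looks roughly like the diff
fragment below; the call site and operands are illustrative rather than
taken verbatim from the patch, and the GPR_* names follow the macros
described above.

- uasm_i_addiu(&p, t0, zero, 1);
+ uasm_i_addiu(&p, GPR_T0, GPR_ZERO, 1);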

Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>


# bf929272 01-Oct-2019 Paul Burton <paulburton@kernel.org>

MIPS: barrier: Add __SYNC() infrastructure

Introduce an asm/sync.h header which provides infrastructure that can be
used to generate sync instructions of various types, and for various
reasons. For example if we need a sync instruction that provides a full
completion barrier but only on systems which have weak memory ordering,
we can generate the appropriate assembly code using:

__SYNC(full, weak_ordering)

When the kernel is configured to run on systems with weak memory
ordering (ie. CONFIG_WEAK_ORDERING is selected) we'll emit a sync
instruction. When the kernel is configured to run on systems with strong
memory ordering (ie. CONFIG_WEAK_ORDERING is not selected) we'll emit
nothing. The caller doesn't need to know which happened - it simply says
what it needs & when, with no concern for checking the kernel
configuration.

There are some scenarios in which we may want to emit code only when we
*didn't* emit a sync instruction. For example, some Loongson3 CPUs
suffer from a bug that requires us to emit a sync instruction prior to
each ll instruction (enabled by CONFIG_CPU_LOONGSON3_WORKAROUNDS). In
cases where this bug workaround is enabled, it's wasteful to then have
more generic code emit another sync instruction to provide barriers we
need in general. A __SYNC_ELSE() macro allows for this, providing an
extra argument that contains code to be assembled only in cases where
the sync instruction was not emitted. For example if we have a scenario
in which we generally want to emit a release barrier but for affected
Loongson3 configurations upgrade that to a full completion barrier, we
can do that like so:

__SYNC_ELSE(full, loongson3_war, __SYNC(rl, always))

The assembly generated by these macros can be used either as inline
assembly or in assembly source files.
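
As an illustration (a hedged sketch, not code from this patch), a C
caller could wrap the generated barrier in inline assembly along these
lines:

#include <asm/sync.h>

/* Hypothetical helper: generally a release barrier, upgraded to a full
 * completion barrier on Loongson3 configurations affected by the LL/SC
 * bug described above. */
static inline void example_release_barrier(void)
{
	asm volatile(__SYNC_ELSE(full, loongson3_war, __SYNC(rl, always))
		     : /* no outputs */
		     : /* no inputs */
		     : "memory");
}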

Differing types of sync as provided by MIPSr6 are defined, but currently
they all generate a full completion barrier except in kernels configured
for Cavium Octeon systems. There the wmb sync-type is used, and rmb
syncs are omitted, as has been the case since commit 6b07d38aaa52
("MIPS: Octeon: Use optimized memory barrier primitives."). Using
__SYNC() with the wmb or rmb types will abstract away the Octeon
specific behavior and allow us to later clean up asm/barrier.h code that
currently includes a plethora of #ifdef's.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org


# 2874c5fd 27-May-2019 Thomas Gleixner <tglx@linutronix.de>

treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152

Based on 1 normalized pattern(s):

this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version

extracted by the scancode license scanner the SPDX license identifier

GPL-2.0-or-later

has been chosen to replace the boilerplate/reference in 3029 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>


# b2ed33a8 20-Feb-2018 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Block system suspend when a JTAG probe is present

If a JTAG probe is connected to a MIPS cluster, then the CPC detects it
and latches the CPC.STAT_CONF.EJTAG_PROBE bit to 1. While set,
attempting to send a power-down command to a core will be blocked, and
the CPC will instead send the core to clock-off state. This can
interfere with systems fully entering a low power state where all cores,
CM, GIC, etc. are powered down.

Detect that a JTAG probe is / has been connected to the cluster and
block the suspend attempt.

Attempting to suspend the system while a JTAG probe is connected now
yields:
# echo mem > /sys/power/state
[ 11.654000] PM: Syncing filesystems ... done.
[ 11.658000] JTAG probe is connected - abort suspend
-sh: echo: write error: Operation not permitted
#

To restore suspend, the JTAG probe should be disconnected or put into
quiescent state. Platform code can then clear the
CPC.STAT_CONF.EJTAG_PROBE bit.
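
A sketch of how such a check can be wired up through a PM notifier is
shown below. The accessor & field names (read_cpc_stat_conf(),
CPC_STAT_CONF_EJTAG_PROBE) are assumptions derived from the register
named above, and the function name is illustrative; the exact code in
the patch may differ.

/* Registered from the init path, e.g. via register_pm_notifier(). */
static int example_cps_pm_power_notifier(struct notifier_block *this,
					 unsigned long event, void *ptr)
{
	unsigned int stat;

	switch (event) {
	case PM_SUSPEND_PREPARE:
		stat = read_cpc_stat_conf();		/* assumed accessor */
		if (stat & CPC_STAT_CONF_EJTAG_PROBE) {	/* assumed field */
			pr_warn("JTAG probe is connected - abort suspend\n");
			return NOTIFY_BAD;
		}
		return NOTIFY_DONE;
	default:
		return NOTIFY_DONE;
	}
}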

Reported-by: Ed Blake <ed.blake@sondrel.com>
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/18641/
Signed-off-by: James Hogan <jhogan@kernel.org>


# fb615d61 25-Oct-2017 Paul Burton <paulburton@kernel.org>

Update MIPS email addresses

MIPS will soon not be a part of Imagination Technologies, and as such
many @imgtec.com email addresses will no longer be valid. This patch
updates the addresses for those who:

- Have 10 or more patches in mainline authored using an @imgtec.com
email address, or any patches dated within the past year.

- Are still with Imagination but leaving as part of the MIPS business
unit, as determined from an internal email address list.

- Haven't already updated their email address (ie. JamesH) or expressed
a desire to be excluded (ie. Maciej).

- Acked v2 or earlier of this patch, which leaves Deng-Cheng, Matt &
myself.

New addresses are of the form firstname.lastname@mips.com, and all
verified against an internal email address list. An entry is added to
.mailmap for each person such that get_maintainer.pl will report the new
addresses rather than @imgtec.com addresses which will soon be dead.

Instances of the affected addresses throughout the tree are then
mechanically replaced with the new @mips.com address.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@mips.com>
Acked-by: Dengcheng Zhu <dengcheng.zhu@mips.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Matt Redfearn <matt.redfearn@mips.com>
Acked-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 48c834be 25-Oct-2017 Paul Burton <paulburton@kernel.org>

Update MIPS email addresses

MIPS will soon not be a part of Imagination Technologies, and as such
many @imgtec.com email addresses will no longer be valid. This patch
updates the addresses for those who:

- Have 10 or more patches in mainline authored using an @imgtec.com
email address, or any patches dated within the past year.

- Are still with Imagination but leaving as part of the MIPS business
unit, as determined from an internal email address list.

- Haven't already updated their email address (ie. JamesH) or expressed
a desire to be excluded (ie. Maciej).

- Acked v2 or earlier of this patch, which leaves Deng-Cheng, Matt &
myself.

New addresses are of the form firstname.lastname@mips.com, and all
verified against an internal email address list. An entry is added to
.mailmap for each person such that get_maintainer.pl will report the new
addresses rather than @imgtec.com addresses which will soon be dead.

Instances of the affected addresses throughout the tree are then
mechanically replaced with the new @mips.com address.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@mips.com>
Acked-by: Dengcheng Zhu <dengcheng.zhu@mips.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Matt Redfearn <matt.redfearn@mips.com>
Acked-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/17540/
Signed-off-by: James Hogan <jhogan@kernel.org>


# 6aa7de05 23-Oct-2017 Mark Rutland <mark.rutland@arm.com>

locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE()

Please do not apply this to mainline directly; instead, please re-run
the coccinelle script shown below and apply its output.

For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
preference to ACCESS_ONCE(), and new code is expected to use one of the
former. So far, there's been no reason to change most existing uses of
ACCESS_ONCE(), as these aren't harmful, and changing them results in
churn.

However, for some features, the read/write distinction is critical to
correct operation. To distinguish these cases, separate read/write
accessors must be used. This patch migrates (most) remaining
ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
coccinelle script:

----
// Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
// WRITE_ONCE()

// $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

virtual patch

@ depends on patch @
expression E1, E2;
@@

- ACCESS_ONCE(E1) = E2
+ WRITE_ONCE(E1, E2)

@ depends on patch @
expression E;
@@

- ACCESS_ONCE(E)
+ READ_ONCE(E)
----

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: davem@davemloft.net
Cc: linux-arch@vger.kernel.org
Cc: mpe@ellerman.id.au
Cc: shuah@kernel.org
Cc: snitzer@redhat.com
Cc: thor.thayer@linux.intel.com
Cc: tj@kernel.org
Cc: viro@zeniv.linux.org.uk
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>


# e83f7e02 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: CPS: Have asm/mips-cps.h include CM & CPC headers

With Coherence Manager (CM) 3.5 information about the topology of the
system, which has previously only been available through & accessed from
the CM, is now also provided by the Cluster Power Controller (CPC). This
includes a new CPC_CONFIG register mirroring GCR_CONFIG, and similarly a
new CPC_Cx_CONFIG register mirroring GCR_Cx_CONFIG.

In preparation for adjusting functions such as mips_cm_numcores(), which
have previously only needed to access the CM, to also access the CPC,
this patch modifies the way we use the various CPS headers. Rather than
having users include asm/mips-cm.h or asm/mips-cpc.h individually we
instead have users include asm/mips-cps.h which in turn includes
asm/mips-cm.h & asm/mips-cpc.h. This means that users will gain access
to both CM & CPC registers by including one header, and most importantly
it makes asm/mips-cps.h an ideal location for helper functions which
need to access the various components of the CPS.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17015/
Patchwork: https://patchwork.linux-mips.org/patch/17217/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# f875a832 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: Abstract CPU core & VP(E) ID access through accessor functions

We currently have fields in struct cpuinfo_mips for the core & VP(E) ID
of a particular CPU, and various pieces of code directly access those
fields. This patch abstracts such access by introducing accessor
functions cpu_core(), cpu_set_core(), cpu_vpe_id() & cpu_set_vpe_id()
and having code that needs to access these values call those functions
rather than directly accessing the struct cpuinfo_mips fields. This
prepares us for changes to the way in which those values are stored in
later patches.
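
For illustration, the accessor style being introduced amounts to trivial
inline wrappers of roughly this shape (a simplified sketch; the real
helpers live in the MIPS headers and may differ in detail):

static inline unsigned int cpu_core(struct cpuinfo_mips *cpuinfo)
{
	return cpuinfo->core;
}

static inline void cpu_set_core(struct cpuinfo_mips *cpuinfo,
				unsigned int core)
{
	cpuinfo->core = core;
}

Callers then write cpu_core(&current_cpu_data) rather than touching
current_cpu_data.core directly, so the underlying storage can change
later without further churn.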

The cpu_vpe_id() function is introduced even though we already had a
cpu_vpe_id() macro for a couple of reasons:

1) It's more consistent with the core, and future cluster, accessors.

2) It ensures a sensible return type without explicit casts.

3) It's generally preferable to use functions rather than macros.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17009/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 829ca2be 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: CPC: Use BIT/GENMASK for register fields, order & drop shifts

Tidy up asm/mips-cpc.h in a similar way to what "MIPS: CM: Use
BIT/GENMASK for register fields, order & drop shifts" did for
asm/mips-cm.h.

We use BIT() & GENMASK() to simplify the definition of register fields,
drop the _SHF definitions since that information can be found in the
_MSK ones, and then drop the _MSK suffix.

Fields definitions are moved to be next to the appropriate register
definition, making it easier to link the two & keep everything ordered
by register address. Comments are added giving the name of each register
& a brief description of its purpose, which helps readers understand
what the registers are for, link them back to hardware documentation, or
grep for them.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17003/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 93c5bba5 12-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: CM: Use BIT/GENMASK for register fields, order & drop shifts

There's no reason for us not to use BIT() & GENMASK() in asm/mips-cm.h
when declaring macros corresponding to register fields. This patch
modifies our definitions to do so.

The *_SHF definitions are removed entirely - they duplicate information
found in the masks, are infrequently used & can be replaced with use of
__ffs() where needed.

The *_MSK definitions then lose their _MSK suffix which is now somewhat
redundant, and users are modified to match.
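
An illustrative before/after for a made-up field shows the shape of the
change (the field name here is hypothetical):

/* Before: separate shift & mask definitions. */
#define CM_GCR_EXAMPLE_FIELD_SHF	8
#define CM_GCR_EXAMPLE_FIELD_MSK	(0xff << 8)

/* After: a single GENMASK()-based definition; the shift, where still
 * needed, is recovered with __ffs(CM_GCR_EXAMPLE_FIELD). */
#define CM_GCR_EXAMPLE_FIELD		GENMASK(15, 8)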

The field definitions are moved to follow the appropriate register's
accessor functions, which helps to keep the field definitions in order &
to find the appropriate fields for a given register. Whilst here a
comment is added describing each register & including its name, which is
helpful both for linking the register back to hardware documentation &
for grepping purposes.

This also cleans up a couple of issues that became obvious as a result
of making the changes described above:

- We previously had definitions for GCR_Cx_RESET_EXT_BASE & a phony
copy of that named GCR_RESET_EXT_BASE - a register which does not
exist. The bad definitions were added by commit 497e803ebf98 ("MIPS:
smp-cps: Ensure secondary cores start with EVA disabled") and made
use of from boot_core(), which is now modified to use the
GCR_Cx_RESET_EXT_BASE definitions.

- We had a typo in CM_GCR_ERROR_CAUSE_ERRINGO_MSK - we now correctly
define this as inFo rather than inGo.

Now that we don't duplicate field information between _SHF & _MSK
definitions, and keep the fields next to the register accessors, it will
be much easier to spot & prevent any similar oddities being introduced
in the future.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17001/
Patchwork: https://patchwork.linux-mips.org/patch/17216/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b7fc2cc5 23-Aug-2017 Paul Burton <paulburton@kernel.org>

MIPS: Declare various variables & functions static

We currently have various variables & functions which are only used
within a single translation unit, but which we don't declare static.
This causes various sparse warnings of the form:

arch/mips/kernel/mips-r2-to-r6-emul.c:49:1: warning: symbol
'mipsr2emustats' was not declared. Should it be static?

arch/mips/kernel/unaligned.c:1381:11: warning: symbol 'reg16to32st'
was not declared. Should it be static?

arch/mips/mm/mmap.c:146:15: warning: symbol 'arch_mmap_rnd' was not
declared. Should it be static?

Fix these & others by declaring various affected variables & functions
static, avoiding the sparse warnings & redundant symbols.

[ralf@linux-mips.org: Add Marcin's build fix.]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/17176/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 161c51cc 02-Mar-2017 Paul Burton <paulburton@kernel.org>

MIPS: pm-cps: Drop manual cache-line alignment of ready_count

We allocate memory for a ready_count variable per-CPU, which is accessed
via a cached non-coherent TLB mapping to perform synchronisation between
threads within the core using LL/SC instructions. In order to ensure
that the variable is contained within its own data cache line we
allocate 2 lines worth of memory & align the resulting pointer to a line
boundary. This is however unnecessary, since kmalloc is guaranteed to
return memory which is at least cache-line aligned (see
ARCH_DMA_MINALIGN). Stop the redundant manual alignment.
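
In outline the change looks something like the following sketch, based
on the description above (types & surrounding structure simplified):

/* Before: over-allocate two d-cache lines & align the pointer manually. */
void *buf = kmalloc(dlinesz * 2, GFP_KERNEL);
u32 *ready_count = PTR_ALIGN(buf, dlinesz);

/* After: kmalloc already returns memory aligned to at least
 * ARCH_DMA_MINALIGN (ie. cache-line aligned), so allocate only what is
 * needed. */
u32 *ready_count = kmalloc(sizeof(*ready_count), GFP_KERNEL);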

Besides cleaning up the code & avoiding needless work, this has the side
effect of avoiding an arithmetic error found by Bryan on 64 bit systems
due to the 32 bit size of the former dlinesz. This led the ready_count
variable to have its upper 32b cleared erroneously for MIPS64 kernels,
causing problems when ready_count was later used on MIPS64 via cpuidle.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Fixes: 3179d37ee1ed ("MIPS: pm-cps: add PM state entry code for CPS systems")
Reported-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable <stable@vger.kernel.org> # v3.16+
Patchwork: https://patchwork.linux-mips.org/patch/15383/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 73c1b41e 21-Dec-2016 Thomas Gleixner <tglx@linutronix.de>

cpu/hotplug: Cleanup state names

When the state names got added a script was used to add the extra argument
to the calls. The script basically converted the state constant to a
string, but the cleanup to convert these strings into meaningful ones did
not happen.

Replace all the useless strings with 'subsys/xxx/yyy:state' strings which
are used in all the other places already.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Link: http://lkml.kernel.org/r/20161221192112.085444152@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


# ba750502 14-Sep-2016 Paul Burton <paulburton@kernel.org>

MIPS: pm-cps: Generate idle state entry code when CPUs are onlined

The MIPS Coherent Processing System (CPS) power management code has
previously generated code used to enter low power idle states once
during boot for all CPUs. This has the drawback that if a CPU is present
in the system but not being used (for example due to the maxcpus kernel
parameter) then we encounter problems due to not having probed that CPU
for information about its type & properties. The result of this is that
we generate entry code which is unused, potentially entirely invalid &
likely to be unsuitable for the CPU in question anyway.

Avoid this by generating idle state entry code only when a CPU is
brought online. This way we only ever generate code for CPUs that we
know we've probed the properties of, and that will actually be used.
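
A hedged sketch of the registration involved follows; the hotplug state
constant, state name & callback name are illustrative rather than
necessarily those used by the patch:

static int cps_pm_online_cpu(unsigned int cpu)
{
	/* Probe the newly-onlined CPU & generate its idle state entry
	 * code here, now that its type & properties are known. */
	return 0;
}

/* From the init path: */
err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mips/cps_pm:online",
			cps_pm_online_cpu, NULL);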

[ralf@linux-mips.org: Resolve merge conflict.]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14259/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 77451997 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Support CM3 changes to Coherence Enable Register

MIPS CM3 changed the management of coherence. Instead of a coherence
control register with a bitmask of coherent domains, CM3 simply has a
coherence enable register with a single bit to enable coherence of the
local core. Support this by clearing and setting this single bit to
disable / enable coherence.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Tony Wu <tung7970@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Nikolay Martynov <mar.kolya@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14226/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 929d4f51 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Add MIPSr6 CPU support

This patch adds support for CPUs implementing the MIPSr6 ISA to the CPS
power management code. Three changes are necessary:

1. In MIPSr6, coupled coherence is necessary when CPUs implement multiple
Virtual Processors (VPs).

2. MIPSr6 virtual processors are more like real cores and cannot yield
to other VPs on the same core, so drop the MT ASE yield instruction.

3. To halt a MIPSr6 VP, the CPC VP_STOP register is used rather than the
MT ASE TCHalt CP0 register.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14225/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 15ea26cf 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Remove selection of sync types

Instead of selecting an implementation- or vendor-specific sync type for
the required sync operations, always use the architecturally mandated
sync types which previous patches have put in place. The selection of
special sync types is now redundant and can be removed.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14223/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 90b084b1 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Use MIPS standard completion barrier

SYNC type 0 is defined in the MIPS architecture as a completion barrier:
all loads/stores in the pipeline before the sync instruction must
complete before any loads/stores subsequent to the sync instruction are
performed.

In places where we require that loads / stores be globally completed,
use the standard completion sync stype.
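
In uasm terms the generated sequences then emit sync stype 0 directly,
roughly as below (illustrative call; buffer name assumed):

uasm_i_sync(&buf, 0);	/* sync 0: architecturally-defined completion barrier */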

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14224/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 85e540be 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Use MIPS standard lightweight ordering barrier

Since R2 of the MIPS architecture, SYNC(0x10) has been an optional but
architecturally defined ordering barrier. If a CPU does not implement it,
the arch specifies that it must fall back to SYNC(0).

In places where we require that the instruction stream not be reordered,
but do not require that loads / stores are globally completed, use the
defined standard sync stype.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14221/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# f6b43d9354 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Update comments on barrier instructions

This code makes extensive use of barriers, which had quite vague
descriptions. Update the comments to make the choice of barrier, and the
reason for it, clearer.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14220/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# b97d0b90 07-Sep-2016 Matt Redfearn <matt.redfearn@mips.com>

MIPS: pm-cps: Change FSB workaround to CPU blacklist

The check for whether a CPU needs the FSB flush workaround previously
required every CPU that does not need it to be whitelisted. That
approach does not scale well as new CPUs are introduced, so change the
default from a WARN and returning an error to just returning 0. Any CPUs
requiring the workaround can then be added to the blacklist.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Paul Burton <paul.burton@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14218/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 97f2645f 03-Aug-2016 Masahiro Yamada <yamada.masahiro@socionext.com>

tree-wide: replace config_enabled() with IS_ENABLED()

The use of config_enabled() against config options is ambiguous. In
practical terms, config_enabled() is equivalent to IS_BUILTIN(), but the
author might have used it for the meaning of IS_ENABLED(). Using
IS_ENABLED(), IS_BUILTIN(), IS_MODULE() etc. makes the intention
clearer.

This commit replaces config_enabled() with IS_ENABLED() where possible.
This commit is only touching bool config options.
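
For a bool option the conversion is purely mechanical, for example (the
option here is chosen for illustration):

- if (config_enabled(CONFIG_CPU_MICROMIPS))
+ if (IS_ENABLED(CONFIG_CPU_MICROMIPS))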

I noticed two cases where config_enabled() is used against a tristate
option:

- config_enabled(CONFIG_HWMON)
[ drivers/net/wireless/ath/ath10k/thermal.c ]

- config_enabled(CONFIG_BACKLIGHT_CLASS_DEVICE)
[ drivers/gpu/drm/gma500/opregion.c ]

I did not touch them because they should be converted to IS_BUILTIN()
in order to keep the logic, but I was not sure it was the authors'
intention.

Link: http://lkml.kernel.org/r/1465215656-20569-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Stas Sergeev <stsp@list.ru>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Jiri Slaby <jslaby@suse.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: "Dmitry V. Levin" <ldv@altlinux.org>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Will Drewry <wad@chromium.org>
Cc: Nikolay Martynov <mar.kolya@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Rafal Milecki <zajec5@gmail.com>
Cc: James Cowgill <James.Cowgill@imgtec.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Mikko Rapeli <mikko.rapeli@iki.fi>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Kalle Valo <kvalo@qca.qualcomm.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Tony Wu <tung7970@gmail.com>
Cc: Huaitong Han <huaitong.han@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrea Gelmini <andrea.gelmini@gelma.net>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Rabin Vincent <rabin@rab.in>
Cc: "Maciej W. Rozycki" <macro@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


# 0f2a1484 02-Feb-2016 Markos Chandras <markos.chandras@imgtec.com>

MIPS: pm-cps: Avoid offset overflow on MIPSr6

This is similar to commit 934c79231c1b ("MIPS: asm: r4kcache: Add MIPS
R6 cache unroll functions"). The CACHE instruction has been redefined
for MIPSr6 and it reduced its offset field to 8 bits. This leads to
micro-assembler field overflow warnings when booting SMP MIPSr6 cores
like the following one:

Call Trace:
[<ffffffff8010af88>] show_stack+0x68/0x88
[<ffffffff8056ddf0>] dump_stack+0x68/0x88
[<ffffffff801305bc>] warn_slowpath_common+0x8c/0xc8
[<ffffffff80130630>] warn_slowpath_fmt+0x38/0x48
[<ffffffff80125814>] build_insn+0x514/0x5c0
[<ffffffff806ee134>] cps_gen_cache_routine.isra.3+0xe0/0x1b8
[<ffffffff806ee570>] cps_pm_init+0x364/0x9ec
[<ffffffff80100538>] do_one_initcall+0x90/0x1a8
[<ffffffff806e8c14>] kernel_init_freeable+0x160/0x21c
[<ffffffff8056b6a0>] kernel_init+0x10/0xf8
[<ffffffff801059f8>] ret_from_kernel_thread+0x14/0x1c

We fix this by incrementing the base register on every loop iteration.
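
Sketched in uasm terms, each loop iteration now emits something like the
pair below instead of accumulating ever larger offsets against a fixed
base (register & variable names are illustrative, not the exact
generated code):

uasm_i_cache(&buf, op, 0, t0);		/* cache op, 0(t0) */
uasm_i_addiu(&buf, t0, t0, linesz);	/* advance the base by one line */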

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12329/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 92a76f6d 25-Feb-2016 Adam Buchbinder <adam.buchbinder@gmail.com>

MIPS: Fix misspellings in comments.

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12617/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 4e88a862 09-Jul-2015 Markos Chandras <markos.chandras@imgtec.com>

MIPS: Add cases for CPU_I6400

Add a CPU_I6400 case to various switch statements, doing the same thing
as for CPU_P5600.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10635/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# c90e49f2 08-Jul-2014 Paul Burton <paulburton@kernel.org>

MIPS: {pm,smp}-cps: use cpu_vpe_id macro

When determining the VPE ID of a CPU, make use of the cpu_vpe_id macro
which will return 0 in a non-MT kernel build. Most code was already doing
so, but a couple of places weren't. Fixing this prevents a build failure
for non-MT kernels where struct cpuinfo_mips does not contain the vpe_id
field:

arch/mips/kernel/pm-cps.c: In function 'cps_pm_enter_state':
arch/mips/kernel/pm-cps.c:153:51: error: 'struct cpuinfo_mips' has no
member named 'vpe_id'
vpe_cfg = &core_cfg->vpe_config[current_cpu_data.vpe_id];

arch/mips/kernel/smp-cps.c: In function 'wait_for_sibling_halt':
arch/mips/kernel/smp-cps.c:363:33: error: 'struct cpuinfo_mips' has no
member named 'vpe_id'
unsigned vpe_id = cpu_data[cpu].vpe_id;
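
The corresponding fix is of this shape, reconstructed from the error
above (exact lines may differ slightly):

- vpe_cfg = &core_cfg->vpe_config[current_cpu_data.vpe_id];
+ vpe_cfg = &core_cfg->vpe_config[cpu_vpe_id(&current_cpu_data)];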

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 064231e5 08-Jul-2014 Paul Burton <paulburton@kernel.org>

MIPS: pm-cps: Prevent use of mips_cps_* without CPS SMP

These symbols will not be defined when CONFIG_MIPS_CPS=n, but although
the CPS_PM_POWER_GATED state will never be used in that case, the
compiler doesn't have enough information to figure that out. Add checks
which evaluate to a constant false for CONFIG_MIPS_CPS=n cases in order
to help the compiler out & eliminate the symbol references.
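
A hedged sketch of the kind of guard involved is below, written with
IS_ENABLED() as the file uses after a later cleanup (the original patch
likely used config_enabled(); the surrounding context is illustrative):

if (IS_ENABLED(CONFIG_MIPS_CPS) && state == CPS_PM_POWER_GATED) {
	/* Code referencing mips_cps_* symbols lives here; for
	 * CONFIG_MIPS_CPS=n the condition is constant-false and the
	 * whole block, symbol references included, is eliminated. */
}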

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7278/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 7c5491b8 11-Jun-2014 Paul Burton <paulburton@kernel.org>

MIPS: pm-cps: convert smp_mb__*()

Commit 91bbefe6b0fc "arch,mips: Convert smp_mb__*()" replaced the
smp_mb__* functions with a simpler API, whilst commit 3179d37ee1ed
"MIPS: pm-cps: add PM state entry code for CPS systems" introduced
new uses of smp_mb__before_atomic_inc & smp_mb__after_clear_bit.
Replace those calls with the corresponding before & after atomic
functions of the new, simpler API in order to avoid a build failure:

arch/mips/kernel/pm-cps.c: In function 'coupled_barrier':
arch/mips/kernel/pm-cps.c:104:2: error: 'smp_mb__before_atomic_inc' is
deprecated (declared at include/linux/atomic.h:11)
[-Werror=deprecated-declarations]

arch/mips/kernel/pm-cps.c: In function 'cps_pm_enter_state':
arch/mips/kernel/pm-cps.c:161:2: error: 'smp_mb__after_clear_bit' is
deprecated (declared at include/linux/bitops.h:48)
[-Werror=deprecated-declarations]
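
The replacements themselves are mechanical; representative hunks
(surrounding context omitted):

- smp_mb__before_atomic_inc();
+ smp_mb__before_atomic();

- smp_mb__after_clear_bit();
+ smp_mb__after_atomic();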

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7086/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>


# 3179d37e 14-Apr-2014 Paul Burton <paulburton@kernel.org>

MIPS: pm-cps: add PM state entry code for CPS systems

This patch adds code to generate entry & exit code for various low power
states available on systems based around the MIPS Coherent Processing
System architecture (ie. those with a Coherence Manager, Global
Interrupt Controller & for >=CM2 a Cluster Power Controller). States
supported are:

- Non-coherent wait. This state first leaves the coherent domain and
then executes a regular MIPS wait instruction. Power savings are
found from the elimination of coherency interventions between the
core and any other coherent requestors in the system.

- Clock gated. This state leaves the coherent domain and then gates
the clock input to the core. This removes all dynamic power from the
core but leaves the core at the mercy of another to restart its
clock. Register state is preserved, but the core cannot service
interrupts whilst its clock is gated.

- Power gated. This deepest state removes all power input to the core.
All register state is lost and the core will restart execution from
its BEV when another core powers it back up. Because register state
is lost this state requires cooperation with the CONFIG_MIPS_CPS SMP
implementation in order for the core to exit the state successfully.

The code will detect which states are available on the current system
during boot & generate the entry/exit code for those states. This will
be used by cpuidle & hotplug implementations.
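
For reference, a cpuidle or hotplug caller can be expected to use the
new interface from asm/pm-cps.h roughly as follows (a minimal sketch;
the wrapper function is hypothetical):

#include <asm/pm-cps.h>

static int example_enter_nc_wait(void)
{
	/* Only attempt states whose entry code was generated for this
	 * system. */
	if (!cps_pm_support_state(CPS_PM_NC_WAIT))
		return -EINVAL;

	/* Leave the coherent domain & execute a wait instruction. */
	return cps_pm_enter_state(CPS_PM_NC_WAIT);
}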

Signed-off-by: Paul Burton <paul.burton@imgtec.com>