Lines Matching defs:from

475 * local CPU's rq->lock, it optionally removes the task from the runqueue and
479 * Task enqueue is also under rq->lock, possibly taken from another CPU.
480 * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
786 * Runs from hardirq context with interrupts disabled.
814 * called from hardirq (IPI) context
1080 * from an idle CPU. This is good for power-savings.
1137 * part of the idle loop. This forces an exit from the idle loop
1270 * E.g. going from 2->1 without going through pick_next_task().
1285 * Iterate task_group tree rooted at *from, calling @down when first entering a
1290 int walk_tg_tree_from(struct task_group *from,
1296 parent = from;
1310 if (ret || parent == from)
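
The walk_tg_tree_from() lines above describe a tree walk that calls @down when first entering a node and @up when leaving it, stopping as soon as either visitor returns non-zero. Below is a minimal user-space sketch of that visit order; it is a simplified recursive illustration, not the kernel's iterative implementation, and struct tg_node, its fields and walk_tree_from() are invented names for the example.

    #include <stddef.h>

    struct tg_node {
            struct tg_node *child;          /* first child, or NULL  */
            struct tg_node *sibling;        /* next sibling, or NULL */
    };

    typedef int (*tg_visitor)(struct tg_node *node, void *data);

    /* Visit every node under *from: @down on entry, @up on exit. */
    static int walk_tree_from(struct tg_node *from,
                              tg_visitor down, tg_visitor up, void *data)
    {
            struct tg_node *child;
            int ret;

            ret = down(from, data);                 /* "entering" callback */
            if (ret)
                    return ret;

            for (child = from->child; child; child = child->sibling) {
                    ret = walk_tree_from(child, down, up, data);
                    if (ret)
                            return ret;
            }

            return up(from, data);                  /* "leaving" callback */
    }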
1543 * - the task specific clamp value, when explicitly requested from userspace
1579 * Tasks can have a task-specific value requested from user-space, track
1614 * When a task is dequeued from a rq, the clamp bucket refcounted by the task
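
The uclamp lines above mention a per-task clamp value explicitly requested from userspace. One way such a request can be made is the sched_setattr(2) syscall with the util-clamp flags; the sketch below hand-copies the uapi struct layout and flag values so it compiles on its own, and it assumes a kernel built with CONFIG_UCLAMP_TASK (v5.3+). Note that, as written, the call also (re)sets the policy to SCHED_OTHER.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Hand-copied uapi layout so the example is self-contained. */
    struct sched_attr {
            uint32_t size;
            uint32_t sched_policy;
            uint64_t sched_flags;
            int32_t  sched_nice;
            uint32_t sched_priority;
            uint64_t sched_runtime;
            uint64_t sched_deadline;
            uint64_t sched_period;
            uint32_t sched_util_min;
            uint32_t sched_util_max;
    };

    #define SCHED_FLAG_UTIL_CLAMP_MIN 0x20
    #define SCHED_FLAG_UTIL_CLAMP_MAX 0x40

    int main(void)
    {
            struct sched_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.size           = sizeof(attr);
            attr.sched_policy   = 0;    /* SCHED_OTHER */
            attr.sched_flags    = SCHED_FLAG_UTIL_CLAMP_MIN;
            attr.sched_util_min = 512;  /* ask for at least ~50% of capacity */

            if (syscall(SYS_sched_setattr, 0 /* current task */, &attr, 0))
                    perror("sched_setattr");
            return 0;
    }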
2346 * If it changed from the expected state, bail out now.
2560 * away from this CPU, or CPU going down), or because we're
2910 * P1 *cannot* return from this set_cpus_allowed_ptr() call until P0 executes
3196 * is removed from the allowed bitmask.
3598 * XXX When called from select_task_rq() we only
4062 * Invoked from try_to_wake_up() to check whether the task can be woken up.
4096 * indicate success because from the regular waker's point of
4100 * from p::saved_state which ensures that the regular
4301 * from the runqueue.
4950 * This is *not* safe to call from within a preemption notifier.
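
The "not safe to call from within a preemption notifier" note belongs to the preempt-notifier API (CONFIG_PREEMPT_NOTIFIERS), most likely the kerneldoc of preempt_notifier_unregister(). A hedged sketch of registering and tearing down such a notifier follows; the demo_* names are invented, and the point is that the unregister step must not happen inside the sched_in/sched_out callbacks themselves.

    #include <linux/preempt.h>
    #include <linux/sched.h>

    static void demo_sched_in(struct preempt_notifier *pn, int cpu)
    {
            /* current task was just scheduled in on @cpu */
    }

    static void demo_sched_out(struct preempt_notifier *pn,
                               struct task_struct *next)
    {
            /* current task is being scheduled out in favour of @next */
    }

    static struct preempt_ops demo_ops = {
            .sched_in  = demo_sched_in,
            .sched_out = demo_sched_out,
    };

    static struct preempt_notifier demo_notifier;

    static void demo_attach(void)
    {
            preempt_notifier_inc();                     /* enable the hooks  */
            preempt_notifier_init(&demo_notifier, &demo_ops);
            preempt_notifier_register(&demo_notifier);  /* current task only */
    }

    static void demo_detach(void)
    {
            /* per the comment above: never from a notifier callback */
            preempt_notifier_unregister(&demo_notifier);
            preempt_notifier_dec();
    }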
5022 * This must be the very last reference to @prev from this CPU. After
5157 * fix up the runqueue lock - which gets 'carried over' from
5222 * @prev: the thread we just switched away from.
5234 * The context switch has flipped the stack from under us and restored the
5325 * @prev: the thread we just switched away from.
5378 if (prev->mm) // from user
5395 if (!prev->mm) { // from kernel
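
The two checks just above distinguish the task being switched away from by its mm: kernel threads have no address space of their own and only borrow one via active_mm. The helper below is a hypothetical illustration of that convention, not code from the file.

    #include <linux/sched.h>

    /* Hypothetical helper: kernel threads run with p->mm == NULL. */
    static inline bool switched_from_kernel_thread(struct task_struct *prev)
    {
            return prev->mm == NULL;    /* the "// from kernel" case */
    }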
5437 * - from a non-preemptible section (of course)
5439 * - from a thread that is bound to a single CPU
5552 * and its field curr->exec_start; when called from task_sched_runtime(),
5586 * indistinguishable from the read occurring a few cycles earlier.
6024 * opportunity to pull in more work from other CPUs.
6562 * preemption from blocking on an 'sleeping' spin/rwlock. Note that
6602 * - in IRQ context, return from interrupt-handler to
6610 * - return from syscall or exception to user-space
6611 * - return from interrupt-handler to user-space
6649 * after coming from user-space, before storing to rq->curr; this
6937 * This is the entry point to schedule() from in-kernel preemption
6980 * from userspace or just about to enter userspace, a preempt enable
7050 * This is the entry point to schedule() from kernel preemption
7053 * protect us against recursive calling from irq.
7257 /* Prevent rq from going away on us: */
7282 * We have to be careful, if called from sys_setpriority(),
7551 * irq metric. Because IRQ/steal time is hidden from the task clock we
7910 /* Prevent rq from going away on us: */
7982 * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace.
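
A hedged sketch of the API named in that kerneldoc line: built-in kernel code raising a kthread to SCHED_FIFO without the capability and rlimit checks applied to user-space requests. The thread function and priority value are illustrative only; for module code, newer kernels point at sched_set_fifo() instead.

    #include <linux/delay.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include <uapi/linux/sched/types.h>

    static int demo_thread_fn(void *unused)
    {
            struct sched_param param = { .sched_priority = 10 };

            /* trusted in-kernel caller: no capability or RLIMIT_RTPRIO checks */
            sched_setscheduler_nocheck(current, SCHED_FIFO, &param);

            while (!kthread_should_stop())
                    msleep(1000);
            return 0;
    }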
8272 * detect the kernel's knowledge of attributes from the attr->size value
8649 * operations here to prevent schedule() from being called twice (once via
8781 * Prevent {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
8919 * eligible task to run, if removing the yield() call from your code breaks
9546 * Invoked from a CPUs hotplug control thread after the CPU has been marked
9709 * Remove CPU from nohz.idle_cpus_mask to prevent it from participating in
9790 * either parked or have been unbound from the outgoing CPU. Ensure that
9805 * might have. Called from the CPU stopper task after ensuring that the
10086 * called from this thread, however somewhere below it might be,
10173 pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
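
That pr_err() is the "sleeping while atomic" diagnostic. A hedged sketch of the kind of bug it catches (with CONFIG_DEBUG_ATOMIC_SLEEP enabled) follows; demo_lock and demo_mutex are invented names.

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(demo_lock);
    static DEFINE_MUTEX(demo_mutex);

    static void buggy(void)
    {
            spin_lock(&demo_lock);          /* atomic context from here on */
            /*
             * mutex_lock() may sleep; its might_sleep() check fires and
             * prints "BUG: sleeping function called from invalid context
             * at ...", naming the file and line of the offending call.
             */
            mutex_lock(&demo_mutex);
            mutex_unlock(&demo_mutex);
            spin_unlock(&demo_lock);
    }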
10429 * Unlink first, to prevent walk_tg_tree_from() from finding us (via
11507 * nice level changed. I.e. when a CPU-bound task goes from nice 0 to
11511 * The "10% effect" is relative and cumulative: from _any_ nice level,
11603 * (TSA) Store to rq->curr with transition from (N) to (Y)
11605 * (TSB) Store to rq->curr with transition from (Y) to (N)
11611 * There is also a transition to UNSET state which can be performed from all
11622 * Scenario A) (TSA)+(TMA) (from next task perspective)
11692 * are not the last task to be migrated from this cpu for this mm, so
11742 * from lazy-put flag set to MM_CID_UNSET.
11859 * from lazy-put flag set to MM_CID_UNSET.