#include <task.h>
Inheritance diagram for tbb::task:
Public Types | |
typedef internal::affinity_id | affinity_id |
An id as used for specifying affinity. | |
enum | state_type { executing, reexecute, ready, allocated, freed, recycle } |
Enumeration of task states that the scheduler considers. More... | |
executing | |
task is running, and will be destroyed after method execute() completes. | |
reexecute | |
task to be rescheduled. | |
ready | |
task is in ready pool, or is going to be put there, or was just taken off. | |
allocated | |
task object is freshly allocated or recycled. | |
freed | |
task object is on free list, or is going to be put there, or was just taken off. | |
recycle | |
task to be recycled as continuation | |
Public Member Functions | |
virtual | ~task () |
Destructor. | |
virtual task * | execute ()=0 |
Should be overridden by derived classes. | |
internal::allocate_continuation_proxy & | allocate_continuation () |
Returns proxy for overloaded new that allocates a continuation task of *this. | |
internal::allocate_child_proxy & | allocate_child () |
Returns proxy for overloaded new that allocates a child task of *this. | |
void __TBB_EXPORTED_METHOD | destroy (task &t) |
Destroy a task. | |
void | recycle_as_continuation () |
Change this to be a continuation of its former self. | |
void | recycle_as_safe_continuation () |
Recommended safe variant of recycle_as_continuation. | |
void | recycle_as_child_of (task &new_parent) |
Change this to be a child of new_parent. | |
void | recycle_to_reexecute () |
Schedule this for reexecution after current execute() returns. | |
intptr_t | depth () const |
void | set_depth (intptr_t) |
void | add_to_depth (int) |
void | set_ref_count (int count) |
Set reference count. | |
void | increment_ref_count () |
Atomically increments reference count and returns its old value. | |
int | decrement_ref_count () |
Atomically decrements reference count and returns its new value. | |
void | spawn_and_wait_for_all (task &child) |
Similar to spawn followed by wait_for_all, but more efficient. | |
void __TBB_EXPORTED_METHOD | spawn_and_wait_for_all (task_list &list) |
Similar to spawn followed by wait_for_all, but more efficient. | |
void | wait_for_all () |
Wait for reference count to become one, and set reference count to zero. | |
task * | parent () const |
task on whose behalf this task is working, or NULL if this is a root. | |
void | set_parent (task *p) |
Sets parent task pointer to specified value. | |
task_group_context * | context () |
This method is deprecated and will be removed in the future. | |
task_group_context * | group () |
Pointer to the task group descriptor. | |
bool | is_stolen_task () const |
True if task was stolen from the task pool of another thread. | |
state_type | state () const |
Current execution state. | |
int | ref_count () const |
The internal reference count. | |
bool __TBB_EXPORTED_METHOD | is_owned_by_current_thread () const |
Obsolete, and only retained for the sake of backward compatibility. Always returns true. | |
void | set_affinity (affinity_id id) |
Set affinity for this task. | |
affinity_id | affinity () const |
Current affinity of this task. | |
virtual void __TBB_EXPORTED_METHOD | note_affinity (affinity_id id) |
Invoked by scheduler to notify task that it ran on unexpected thread. | |
void __TBB_EXPORTED_METHOD | change_group (task_group_context &ctx) |
Moves this task from its current group into another one. | |
bool | cancel_group_execution () |
Initiates cancellation of all tasks in this cancellation group and its subordinate groups. | |
bool | is_cancelled () const |
Returns true if the context has received cancellation request. | |
void | set_group_priority (priority_t p) |
Changes priority of the task group this task belongs to. | |
priority_t | group_priority () const |
Retrieves current priority of the task group this task belongs to. | |
Static Public Member Functions | |
static internal::allocate_root_proxy | allocate_root () |
Returns proxy for overloaded new that allocates a root task. | |
static internal::allocate_root_with_context_proxy | allocate_root (task_group_context &ctx) |
Returns proxy for overloaded new that allocates a root task associated with user supplied context. | |
static void | spawn_root_and_wait (task &root) |
Spawn task allocated by allocate_root, wait for it to complete, and deallocate it. | |
static void | spawn_root_and_wait (task_list &root_list) |
Spawn root tasks on list and wait for all of them to finish. | |
static void | enqueue (task &t) |
Enqueue task for starvation-resistant execution. | |
static void | enqueue (task &t, priority_t p) |
Enqueue task for starvation-resistant execution on the specified priority level. | |
static task &__TBB_EXPORTED_FUNC | self () |
The innermost task being executed or destroyed by the current thread at the moment. | |
Protected Member Functions | |
task () | |
Default constructor. | |
Friends | |
class | interface5::internal::task_base |
class | task_list |
class | internal::scheduler |
class | internal::allocate_root_proxy |
class | internal::allocate_root_with_context_proxy |
class | internal::allocate_continuation_proxy |
class | internal::allocate_child_proxy |
class | internal::allocate_additional_child_of_proxy |
typedef internal::affinity_id tbb::task::affinity_id |
An id as used for specifying affinity.
Guaranteed to be integral type. Value of 0 means no affinity.
Enumeration of task states that the scheduler considers.
executing | task is running, and will be destroyed after method execute() completes. |
reexecute | task to be rescheduled. |
ready | task is in ready pool, or is going to be put there, or was just taken off. |
allocated | task object is freshly allocated or recycled. |
freed | task object is on free list, or is going to be put there, or was just taken off. |
recycle | task to be recycled as continuation |
internal::allocate_continuation_proxy& tbb::task::allocate_continuation | ( | ) | [inline] |
Returns proxy for overloaded new that allocates a continuation task of *this.
The continuation's parent becomes the parent of *this.
bool tbb::task::cancel_group_execution | ( | ) | [inline] |
Initiates cancellation of all tasks in this cancellation group and its subordinate groups.
void __TBB_EXPORTED_METHOD tbb::task::change_group | ( | task_group_context & | ctx | ) |
Moves this task from its current group into another one.
Argument ctx specifies the new group.
The primary purpose of this method is to associate a unique task group context with a task allocated for subsequent enqueuing. In contrast to spawned tasks, enqueued ones normally outlive the scope where they were created. This makes the traditional usage model, where task group contexts are allocated locally on the stack, inapplicable, and dynamic allocation of context objects is inefficient. Method change_group() makes it possible to store the task group context object as a member of the task class, and to associate it with its containing task object in the latter's constructor.
task_group_context* tbb::task::context | ( | ) | [inline] |
This method is deprecated and will be removed in the future.
Use method group() instead.
int tbb::task::decrement_ref_count | ( | ) | [inline] |
Atomically decrements reference count and returns its new value.
Has release semantics.
void __TBB_EXPORTED_METHOD tbb::task::destroy | ( | task & | t | ) |
Destroy a task.
Usually, calling this method is unnecessary, because a task is implicitly deleted after its execute() method runs. However, sometimes a task needs to be explicitly deallocated, such as when a root task is used as the parent in spawn_and_wait_for_all.
static void tbb::task::enqueue | ( | task & | t | ) | [inline, static] |
Enqueue task for starvation-resistant execution.
The task will be enqueued at the normal priority level, disregarding the priority of its task group.
The rationale for this semantics is that the priority of an enqueued task is fixed at the moment of enqueuing, while task group priority is dynamic. Automatic priority inheritance would therefore generally be subject to a race, which could result in unexpected behavior.
Use the enqueue() overload with an explicit priority value, together with the task::group_priority() method, to implement such priority inheritance when it is really necessary.
void tbb::task::increment_ref_count | ( | ) | [inline] |
Atomically increments reference count and returns its old value.
Has acquire semantics.
virtual void __TBB_EXPORTED_METHOD tbb::task::note_affinity | ( | affinity_id | id | ) | [virtual] |
Invoked by scheduler to notify task that it ran on unexpected thread.
Invoked before method execute() runs if the task was stolen, or if the task has affinity but will execute on a different thread.
The default action does nothing.
void tbb::task::recycle_as_continuation | ( | ) | [inline] |
Change this to be a continuation of its former self.
The caller must guarantee that the task's refcount does not become zero until after the method execute() returns. Typically, this is done by having method execute() return a pointer to a child of the task. If the guarantee cannot be made, use method recycle_as_safe_continuation instead.
Because of the hazard, this method may be deprecated in the future.
void tbb::task::recycle_as_safe_continuation | ( | ) | [inline] |
Recommended safe variant of recycle_as_continuation.
For safety, it requires an additional increment of ref_count. With no descendants and a ref_count of 1, it has the semantics of recycle_to_reexecute.
void tbb::task::recycle_to_reexecute | ( | ) | [inline] |
Schedule this for reexecution after current execute() returns.
Made obsolete by recycle_as_safe_continuation; may become deprecated.
void tbb::task::spawn_root_and_wait | ( | task_list & | root_list | ) | [inline, static] |
Spawn root tasks on list and wait for all of them to finish.
If there are more tasks than worker threads, the tasks are spawned in front-to-back order.
void tbb::task::wait_for_all | ( | ) | [inline] |
Wait for reference count to become one, and set reference count to zero.
Works on tasks while waiting.