@kognifai/cogsengine

    Cogs Task

    Concurrency in Cogs.Core is achieved through the TaskManager, which creates and manages task objects. Tasks are represented by function objects of the form std::function<void(void)>, which lets the creator of the task manage the data captured, implicitly or explicitly, by the function or lambda used.

    To create and execute a simple task, call the TaskManager::enqueue() method of the task manager as follows:

    ...
    // Create a simple lambda function, printing some text to the console.
    auto myTask = []() { printf("Hello Tasks!"); };
    
    // Enqueue our lambda in the global queue. It will be automatically executed when possible.
    context->taskManager->enqueue(TaskManager::GlobalQueue, myTask);
    

    Engine::runTaskInMainThread() can be used from an asynchronous task to execute the remainder of the task on the main thread. Typical use is to perform asynchronous I/O, but handle the result of the I/O on the main thread to avoid multi-threading problems.

    DataFetcherManager::fetchAsync(context, url, [](std::unique_ptr data)
      {
        // Data is available in worker thread. Create a data parsing task lambda
        auto task = [context, data = data.release()](){
          parseData(context, data);
          delete data;
          context->engine->setDirty();
        };
    #ifdef __EMSCRIPTEN__
        // Cogs.js support - no threading
        task();
    #else
        context->engine->runTaskInMainThread(std::move(task));
    #endif
      }, fileOffset, fileByteSize);
    

    When queuing a task, take care to choose which task queue the new task is entered into. Users can either create their own task queues through TaskManager::createQueue(), or use one of the built-in queues:

    • GlobalQueue - generic global task queue
    • GeometryQueue - queue used by geometry-generating systems; this queue is flushed before rendering each frame to ensure the geometry is synchronized.
    • ResourceQueue - queue used by resource loading code, never synchronized, always runs in background.

    If needed, custom queues can be created like the following:

    ...
    // Create a simple lambda function, printing some text to the console.
    auto myTask = []() { printf("Hello Tasks!"); };
    
    // Create a custom queue.
    auto queueId = context->taskManager->createQueue("MyQueue");
    
    // Enqueue our lambda in the custom queue.
    auto taskId = context->taskManager->enqueue(queueId, myTask);
    
    // Wait for all tasks in the custom queue to finish.
    context->taskManager->waitAll(queueId);
    

    If synchronization of tasks is needed, use the TaskManager::wait() method of the task manager to block execution in the current scope until the waited-for task has finished.

    To wait for a task, keep the TaskId object returned by calls to enqueue() and pass it to the wait() method:

    ...
    // Create a simple lambda function, printing some text to the console.
    auto myTask = []() { printf("Hello Tasks!\n"); };
    
    // Enqueue our lambda in the global queue, storing the returned TaskId.
    auto taskId = context->taskManager->enqueue(TaskManager::GlobalQueue, myTask);
    
    // Wait for our task to be completed.
    context->taskManager->wait(taskId);
    
    printf("Done!");
    ...
    
    Prints:
     Hello Tasks!
     Done!
    

    When working with tasks on platforms with thread support, care needs to be taken to avoid data races and sharing violations. Especially when working with resources (meshes, models, textures etc.) inside tasks, the developer needs to be aware of the limitations and behavior of resources on multiple threads.

    By default, the API of the ResourceManager is thread-safe. This means that the creation, state changes, and deletion of resources are thread-safe. Keeping handles to resources alive and in scope also ensures the lifetime of the resource for the duration of said scope in a thread-safe manner.

    However, modifying resource data is not thread-safe. For example, setting Mesh data from a task executed by the task manager may lead to data corruption and crashes. To facilitate concurrent writing and reading of resource data, the resource manager provides locking functionality through the lock() and createLocked() methods. If the executing code can give strong guarantees that no data races will occur, resource modifications may take place in user tasks, but synchronization must then be handled explicitly through waiting or similar mechanisms.

    The locking methods can be used to modify resource data through proxy resources, which will be automatically synchronized with the original resource after exiting the scope.

    Example proxy usage:

    ...
    MeshHandle handle;
    
    // Our task modifying a Mesh instance.
    auto task = [=]() // Capture the handle and context by value
    {
      // Acquire a safe resource proxy from the resource manager.
      auto mesh = context->meshManager->lock(handle);
      
      // Safe modification of the Mesh resource.
      mesh->setVertexData(...);
      
      // When leaving scope, the changes to the Mesh are queued up and will be synchronized with the main thread
      // at a later time.
    };
    
    // Enqueue our task, which will be executed at some unknown point in time. May for example be executed during
    // the rendering phase of our current frame.
    context->taskManager->enqueue(TaskManager::GlobalQueue, task);
    
    // Get a pointer to the actual mesh resource.
    auto mesh = handle.resolve();
    
    // Safe read of data, mesh resource may or may not contain data set in our custom task.
    auto data = readDataFromMesh(mesh);
    

    While proxy usage works fine for long-running asynchronous tasks such as model or texture loading, it is often desirable to modify resources in parallel and in "real time" while retaining safety. This can be achieved by manually synchronizing task execution, ensuring that all tasks run to completion before returning to code that may access the same resources.

    Example parallel modification with explicit synchronization:

    ...
    // Assume this is executing on the main thread, for example during some update() call
    // in a component system.
    
    // A set of resources.
    std::vector<MeshHandle> meshes;
    
    // Create a custom queue.
    auto queueId = context->taskManager->createQueue("MeshQueue");
    
    for (auto & mesh : meshes) {
      context->taskManager->enqueue(queueId, [&]()
      {
        // Modify Mesh contents.
        mesh.resolve()->setData(calculateMeshData());
      });
    }
    
    // Wait until all tasks are finished before returning control to the main thread.
    context->taskManager->waitAll(queueId);