Description
A discussion topic regarding the current implementation of the "tools" in the example application.
Goal
Overcome limitations of the current implementation.
- Tools are called every frame, even when not "active"
- Tools are free functions, to avoid type issues when storing an `_activeTool` in the application
- Tools do not support multiple inputs in parallel, e.g. Wacom tablet + mouse movements
- Tools need metadata, like a label and type, which are currently independent
- Tools can only manipulate the next frame, i.e. do nothing (useful) during pause
- Tools could be entities, but are not
- Inputs are assigned to the entities being manipulated; should they maybe be assigned to the tool instead?
Overview of Current Implementation
The user interacts with the world using "tools".
A tool doesn't immediately modify any data; instead, a tool generates events. Events are then interpreted by the application, via an "event handler", which in turn modifies your data. The goal is being able to play back the events and reproduce exactly what the user did with each tool.
Tool         Event        Application
 _
| |           _
| |-------->| |           _
| |         | |-------->| |
|_|         | |         | |
            |_|         | |
                        | |
                        | |
                        |_|
The application has a notion of an "Active Tool" which is called once per frame.
sequentity/Example/Source/main.cpp
Lines 615 to 617 in f5196fb
The tool does nothing unless there is an entity with a particular component called `Active`, along with some "input".
sequentity/Example/Source/Tools.inl
Lines 140 to 144 in f5196fb
The `Active` component is assigned by the UI layer, in this case ImGui, whenever an entity is clicked.
sequentity/Example/Source/main.cpp
Lines 571 to 583 in f5196fb
These are the three states of any entity able to be manipulated with a Tool.
- `Activated` - the entity has transitioned from passive to active; this happens once
- `Active` - the entity is activated and being manipulated (e.g. dragged)
- `Deactivated` - the entity has transitioned from active back to passive; this happens once
Thoughts
Overall, I'm looking for thoughts on the current system. I expect similar things have been done many times, perhaps with the exception of (1) wanting to support multiple inputs in parallel, e.g. two mice, and (2) wanting the user to ultimately assign an arbitrary input to an arbitrary tool, e.g. swapping from the mouse affecting position to the Wii controller.
Ultimately, the application is meant to facilitate building of a physical "rig", where you have a number of physical input devices, each affecting some part of a character or world. Like a marionettist and her control bar.
Code wise, there are a few things I like about the current implementation, and some I dislike.
Likes
- Tristate: I like the `Activated`, `Active` and `Deactivated` aspect; I borrowed it from ImGui, where it seems to work quite nicely and is quite general.
- Events rule: I like that tools only have a single responsibility, which is to generate events. That means I could generate events from anywhere, like from mouse-move events directly, and it would integrate well with the sequencer and application behavior.
- Encapsulation: I also like that because events carry the final responsibility, manipulating events is straightforward and intuitive, and serialising them to disk is unambiguous.
- Generic inputs: And I like how inputs are somewhat general, but I'll touch on inputs in another issue to keep this one focused on tools.
Dislikes
- UI and responsibility: I don't like the disconnect caused by `Active` being assigned from the UI layer.
- Inputs on the wrong entities: I don't like how inputs, e.g. `InputPosition2D`, are associated with the entities being manipulated, like "hip" and "leftLeg", rather than with the tool itself, which seems more intuitive.