Lumiera
The new emerging NLE for GNU/Linux
You may call this text a vision, albeit a very moderate one.

The basic approach taken by Cinelerra is quite modern and clean. I want to sketch what would be possible if we’d really live up to this basic approach, making it govern parts of the actual Design and Implementation.
Just about doing things that seem self-evident nowadays.

We could spend our time

  • just hunting and fixing bugs — or we could spend it making things reliable by design

  • just having a GUI and Features like “the others” — or we could spend it embodying and encompassing these common ideas and structures within our internal design and implementation.

So — to motivate all those clean-up and restructuring efforts, I’ll lay out some Ideas that seem to me to be quite at hand, and I’ll show how they could fit together. The first steps seem to be quite agreed upon; further steps and possibilities are just coming into view at the moment. Feel free to propose and participate.

I think we should prefer the bottom-up approach of asking how a certain design could give rise to useful features over the approach of defining cool features and then figuring out how to implement them.

Builder Pattern

Completely separate the building of the Render Pipeline from the running of the Render Pipeline. The Nodes in this Pipeline should process Video/Audio and do nothing else: no decisions, tests or conditional operations remain when running the Pipeline. Move all of this out into the configuration of the pipeline, done by the Builder (see the sketch after this list).

  • Make the actual processing nodes Template classes, parametrised by the color model and number of components.

  • Put all Nodes on an equal footing with each other, able to be connected freely within the limitations of the necessary input and output.

  • Make the OpenGL rendering an alternate implementation of some operations, together with an alternate signal flow (usable only if the whole Pipeline can be built up to support this changed signal flow), thus factoring all the complexities of managing the data flow between core and hardware-accelerated rendering out of the implementation of the actual processing.

  • Introduce separate control data connections for the automation data, separating the case of true multi-channel effects from the case where one node just gets remote-controlled by another node (or two nodes use the same automation data).

  • Unfold any “internal-multi” effects into separate instances, e.g. allow an arbitrary number of single masks at any point of the pipeline instead of one special masking facility encompassing multiple sub-masks.
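To make this concrete, here is a minimal C++ sketch of that separation. All names (Frame, ProcNode, GainNode, Builder) are hypothetical, not taken from any existing code base; the point is only the shape: nodes are dumb, fully configured objects, and every decision happens once, in the Builder.

----
#include <cstddef>
#include <memory>
#include <vector>

// A buffer of raw media data; its layout is fixed when the pipeline is built.
struct Frame { std::vector<float> data; };

// A node in the render pipeline only processes data, nothing else.
struct ProcNode {
    virtual ~ProcNode() = default;
    virtual void process(Frame const& in, Frame& out) = 0;
};

// Example node, parametrised at compile time by the number of components,
// so the inner loop contains no runtime test of the color model.
// (Assumes the frame holds a whole number of pixels.)
template<std::size_t CHANNELS>
struct GainNode : ProcNode {
    float gain;
    explicit GainNode(float g) : gain{g} {}
    void process(Frame const& in, Frame& out) override {
        out.data.resize(in.data.size());
        for (std::size_t i = 0; i + CHANNELS <= in.data.size(); i += CHANNELS)
            for (std::size_t c = 0; c < CHANNELS; ++c)
                out.data[i+c] = in.data[i+c] * gain;
    }
};

// The Builder makes every decision once, up front: which concrete node types
// to instantiate and how to wire them. The finished pipeline then runs
// without any further tests or conditionals.
struct Builder {
    std::vector<std::unique_ptr<ProcNode>> buildPipeline(bool rgba) {
        std::vector<std::unique_ptr<ProcNode>> pipeline;
        if (rgba) pipeline.push_back(std::make_unique<GainNode<4>>(0.8f));
        else      pipeline.push_back(std::make_unique<GainNode<3>>(0.8f));
        return pipeline;              // decided here, never again per frame
    }
};
----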

Abstract Data-flow

Apply a moderate degree of abstraction to the data handling and data flows. Unify all data reading and writing and push it down into a data handling backend. Create a new data flow type, “Intention and Hinting messages”, thus giving rise to far more clever media access and caching on all levels, without the Object/Editing Subsystem having to make any data flow optimization decisions. Unify the control and automation data and make it a first-class data flow. Encapsulate all interpolation and keying considerations, thus freeing the Render nodes from any need to access “keyframes” or to do sub-frame interpolation (e.g. within one Audio Frame) on their own. Automation data could as well be created within the Render Engine (e.g. by the motion tracker) and written into an automation track for further editing.
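As an illustration of the last point, a small C++ sketch of such an encapsulation could look as follows. The names (ParamProvider, LinearAutomation) are made up for this example; a render node would only ever ask for a value at a given time.

----
#include <iterator>
#include <map>

using Time = double;

// A render node never touches keyframes; it just asks for the value of a
// parameter at a given time. All interpolation hides behind this interface.
struct ParamProvider {
    virtual ~ParamProvider() = default;
    virtual double valueAt(Time t) const = 0;
};

// One possible implementation: linear interpolation over keyframes.
// The keys could be written by the GUI, or just as well by the engine
// itself, e.g. a motion tracker producing automation data.
struct LinearAutomation : ParamProvider {
    std::map<Time,double> keys;

    double valueAt(Time t) const override {
        if (keys.empty()) return 0.0;
        auto hi = keys.lower_bound(t);            // first key at or after t
        if (hi == keys.begin()) return hi->second;
        if (hi == keys.end())   return std::prev(hi)->second;
        auto lo = std::prev(hi);
        double f = (t - lo->first) / (hi->first - lo->first);
        return lo->second + f * (hi->second - lo->second);
    }
};
----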

Self-contained Builder

Make the Builder a separate entity with clearly specified interfaces. One interface is towards the Rendering Engine, for loading a newly constructed Render Pipeline into the Engine, for hot-swapping parts of the Render Pipeline in case of modifications during playback/rendering, and for passing on “Intention and Hinting messages”. The other interface is towards the Object manipulation Subsystem, the EDLs and Session handling. Pass over this interface instructions on what has to be done, but not how to do it. Concentrate structural and methodical optimizations within the Builder. It could rearrange or even “compile” the Render Pipeline; e.g. it could decide to render only a region of some pipeline, because the rest will be discarded by the final projector and overlaid by another track’s output. This opens possibilities for further research and improvement without the need for hand-written assembler or OpenGL code.
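A rough C++ sketch of these two interfaces, with all names (EngineFacade, BuildInstruction and so on) purely hypothetical, could look like this:

----
#include <memory>
#include <string>
#include <vector>

struct RenderPipeline;                          // the fully wired node network
struct HintMessage { std::string payload; };    // "Intention and Hinting" traffic

// Interface towards the Render Engine: install or hot-swap a pipeline and
// forward hints. The Builder never reaches into engine internals.
struct EngineFacade {
    virtual ~EngineFacade() = default;
    virtual void install(std::shared_ptr<RenderPipeline>) = 0;
    virtual void hotSwap(std::shared_ptr<RenderPipeline> segment) = 0;
    virtual void forward(HintMessage const&) = 0;
};

// Interface towards the Session/EDL side: it hands the Builder declarative
// instructions saying *what* should exist, never *how* to wire it.
struct BuildInstruction { std::string pluginID; /* location, parameters... */ };

struct Builder {
    explicit Builder(EngineFacade& engine) : engine_{engine} {}

    void rebuild(std::vector<BuildInstruction> const& instructions) {
        engine_.install(compile(instructions));   // all optimization lives here
    }
private:
    std::shared_ptr<RenderPipeline>
    compile(std::vector<BuildInstruction> const&) { return {}; }  // real work elided
    EngineFacade& engine_;
};
----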

Clear Control Flow and Dependencies

Always try to arrive at a uniform Control Flow. Avoid mutual couplings and callbacks. Use Dependency Injection and Inversion of Control. For example, let the Builder make a complete set of decisions ready to be executed. Never call back from within the Render Engine to get additional information for “Plugin X” out of “Track N” in “the EDL”, because this unnecessarily couples the Render Engine to internal details of “the EDL”.
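For example, a node could get its automation handed in at construction time instead of looking it up itself. Again a hypothetical C++ sketch, not actual code:

----
// ParamProvider as sketched in the data-flow section above.
struct ParamProvider {
    virtual ~ParamProvider() = default;
    virtual double valueAt(double t) const = 0;
};

// The node receives everything it will ever need at construction time;
// it never calls back into "the EDL" or the session to look anything up.
class BlurNode {
    ParamProvider const& radius_;          // injected by the Builder
public:
    explicit BlurNode(ParamProvider const& radius) : radius_{radius} {}

    void process(double frameTime /*, buffers... */) {
        double r = radius_.valueAt(frameTime);   // no lookup in "the EDL"
        // ...apply the blur with radius r to the injected buffers...
        (void) r;
    }
};
----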

Separate Edit Decisions and actual Render Configuration

Distinguish between the Objects the user places and manipulates within the Session/EDL and the Objects which comprise the Render Pipeline. Don’t take a “Plugin” out of “the EDL” and connect it into the Render Pipeline, be it bare or wrapped within a Virtual Node. Rather, declare that the user wants a placement of some Plugin (given by ID) at some Location, given absolutely by (Track, Frame) or relatively by “attached at Clip X”. Let the Builder retrieve the actual Plugin or the actual Clip (or a Dummy/Proxy), resolve everything to actual Positions and Render Pipe Numbers (corresponding to the Tracks) and finally wire the Objects together. Persist both the Placement requested by the user (this is the EDL) and the actual Assembly found to implement this placement. Give the ability to fix this Assembly to a certain degree, so things can’t be messed up by further editing. The user can then, say, lock a part of a larger project which is “done” and be confident that not the slightest detail will be changed when doing the final render some months later. (That’s the reason why I propose to persist the actual Assembly as well, so all of the editing work can be checked into an SCM system by the user. This may seem silly at the moment, as most users don’t know the power of SCM, but that will change for sure.)
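Expressed as data, such an edit decision could look roughly like the following C++ sketch; the names and fields are assumptions made for this example only.

----
#include <string>
#include <variant>

struct AbsoluteLocation { int track; long frame; };     // (Track, Frame)
struct RelativeLocation { std::string clipID; };        // "attached at Clip X"

// What the user records in the EDL: an edit decision, expressed as data.
struct Placement {
    std::string pluginID;                                   // what to place
    std::variant<AbsoluteLocation, RelativeLocation> at;    // where to place it
    bool locked = false;      // a "done" part of the project can be frozen
};

// What the Builder derives from it and what gets persisted alongside the EDL:
// the concrete wiring within the Render Pipeline.
struct AssemblyEntry {
    std::string nodeID;
    int renderPipe;            // corresponds to the track
    long startFrame;
};
----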

Use High-Level techniques where appropriate

Stick to the proved and the reliable for the foundations of the system. No “new Languages” for the Backbone! But be open to advanced technology in cleanly isolated subsystems. For example, embed a Prolog engine and maybe even some Constraint solver. Use them within the Builder to make the decisions, and use them to derive the actual Object Assembly out of the Placements given by the user. We could even make the internal Structure visible to a certain degree, giving the user a better “feel” for what is going on internally, because this way the internals could be expressed in a high-level manner, not overloaded with implementation details.
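One way to keep such a subsystem cleanly isolated is a narrow interface like the hypothetical C++ sketch below; whether a Prolog engine, a constraint solver or plain hand-written code sits behind it then becomes an implementation detail.

----
#include <memory>
#include <vector>

struct Placement     { /* ...the edit decision, as sketched above... */ };
struct AssemblyEntry { /* ...the derived render configuration... */ };

// The Builder only ever talks to this narrow interface. Behind it could sit
// a plain hand-coded resolver today, and an embedded Prolog engine or a
// constraint solver later, without the rest of the system noticing.
struct ConfigResolver {
    virtual ~ConfigResolver() = default;
    virtual std::vector<AssemblyEntry>
    solve(std::vector<Placement> const& placements) = 0;
};

std::unique_ptr<ConfigResolver> makeDefaultResolver();   // factory, impl elided
----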

Structured EDL Elements

Make the contents of the EDL — the things manipulated by the user — more structured. Because of Decoupling and Inversion of Control, there is no longer any need for a flat and simple structure along the lines of Tracks, Edits, Plugin-Sets and separate Automation keyframes. Things could be tied together directly: Edits could be placed to be sequential, Clips and Sound could be linked, Effects could be attached to the track or to groups of objects, and automation would be attached to the Effects and move with them. Nothing revolutionary. I have seen several users getting angry and feeling betrayed when a System hinders them from doing such things, which seem natural today. That’s the story of “Accidental Complexity” versus “Fundamental Complexity”: if I have to take care actively to keep an audio clip in sync, this creates Accidental Complexity, both for the user and for the programmer.
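A hypothetical C++ sketch of such directly tied-together EDL objects, just to illustrate the idea:

----
#include <memory>
#include <string>
#include <vector>

struct Automation { /* keyframes live here, owned by the effect */ };

struct Effect {
    std::string pluginID;
    std::vector<std::shared_ptr<Automation>> automation;  // moves with the effect
};

struct Clip {
    std::string mediaID;
    long start = 0, length = 0;
    std::vector<std::shared_ptr<Effect>> effects;   // attached to this clip
    std::weak_ptr<Clip> linkedAudio;                // picture and sound stay linked;
                                                    // moving one moves both
};
----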

Support building up larger Structures

All these achievements together enable the building of larger structures and help to maintain them. If we separate Edit Decision Objects from Render Objects, we could provide “Meta Clips”, the ability to seamlessly assemble complete Timelines or Sub-Sessions for a larger project. We could extend the Asset management to be really helpful where it is needed, i.e. in larger projects: building up a customized collection of Media, prepared Clips, really-needed Effects and preconfigured Snippets, and using them uniformly in the assembly of the individual Sessions comprising a larger project. Save them as Templates or Macro Blocks, search for keywords, organize clips within bins and sub-bins, integrate with external systems used to organize movie shooting.
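Separating edit decisions from render objects makes a “Meta Clip” essentially a composite; the following C++ sketch (hypothetical names, of course) shows the idea.

----
#include <memory>
#include <vector>

// Anything that can sit in a timeline.
struct SequenceElement {
    virtual ~SequenceElement() = default;
    virtual long duration() const = 0;
};

struct SimpleClip : SequenceElement {
    long length;
    explicit SimpleClip(long len) : length{len} {}
    long duration() const override { return length; }
};

// A complete sub-session or sub-timeline, used like a single clip.
struct MetaClip : SequenceElement {
    std::vector<std::unique_ptr<SequenceElement>> content;
    long duration() const override {
        long sum = 0;
        for (auto const& e : content) sum += e->duration();
        return sum;
    }
};
----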

Conclusion

None of this is there at the moment, and none of it will be there immediately. All of this together is far too much work to be done by a single programmer or a small core team. We need collaboration to get to this level. But I don’t see any other viable approach for getting there besides working up from the core in a structured manner.

(last changed on 2007-06-10 18:34:44 by MichaelPloujnikov)

Historical note

This text is a follow-up within a series of publications on pipapo.org, starting with »Cinelerra woes« in May 2007, and the Cinelerra-3 project proposal. Notably, this third text from June 2007 already contains all the fundamental ideas of the Lumiera project, which emerged from the initial movement to improve Cinelerra and constituted itself as a separate project early in 2008.

I’m re-publishing this text here, closely following the original, roughly drafted form, as it was preserved by an HTML capture early in 2008 — yet with the markup transformed into Asciidoc.

Ichthyo

2025-09-13