Workflow: A Lynchian Approach with a VJ Overhead

During this incredibly busy week (writing at 4:35 AM on Monday!), we’re preparing for an exciting David Lynch tribute performance at a great venue. This follows last week’s 3:30 AM underground techno/noise gig, which marked our first test of a new JavaScript-based visual system.

Our setup featured dual projectors—one displaying our coding UI with interactive elements, and the other showing a fully generative environment capable of blending three sources (shaders, videos, generative content). The system responded to our code triggers and audio impulses, creating surprisingly diverse output.

For the Lynch tribute, we’ve already completed significant audio groundwork. We sampled most of Lynch’s films, albums, and soundtracks, and enhanced FoxDot with special functions to improve our loop-based performance style. Of course, simply playing loops would quickly become monotonous, so we’ve built in variations (more on the sound approach later).

For visuals, I initially considered creating an entirely new system for this specific performance—partly out of frustration with previous attempts. Sometimes starting fresh feels more productive than improving complex existing code.

Several factors changed my approach:

  • Our JavaScript video player issues appear largely resolved

  • The generative approach made increasing sense

  • Time constraints were significant

  • DaVinci’s auto sequence detection capability proved valuable

  • Positive reception of our previous visual work

  • Consistency with our established aesthetic

With AI assistance, I developed a Python program that processes movie files and outputs detected sequences as separate MP4s. It’s GPU-intensive but effective! I added filters to remove sequences that are too short, plus tolerance controls and encoding options.
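The post doesn’t name the exact stack, but a minimal sketch of such an extractor — assuming the PySceneDetect library for detection and its ffmpeg splitter for output — could look like this (the library choice, threshold value, and function names are my assumptions, not necessarily what the actual tool uses):

```python
def filter_short_scenes(scenes, min_seconds=2.0):
    """Drop detected scenes shorter than `min_seconds`.

    `scenes` is a list of (start_seconds, end_seconds) pairs."""
    return [(s, e) for s, e in scenes if e - s >= min_seconds]


def extract_scenes(movie_path, threshold=27.0, min_seconds=2.0):
    """Detect cuts in `movie_path` and split the kept scenes into MP4s."""
    # Import kept local so the pure helper above works without the library.
    from scenedetect import detect, ContentDetector, split_video_ffmpeg

    # `threshold` plays the role of the tolerance control mentioned above.
    scene_list = detect(movie_path, ContentDetector(threshold=threshold))
    keep = [(s, e) for s, e in scene_list
            if e.get_seconds() - s.get_seconds() >= min_seconds]
    # Encoding options can be passed to ffmpeg via `arg_override`.
    split_video_ffmpeg(movie_path, keep)
```

The minimum-length filter is the part that matters most in practice: hard cuts produce many one-second fragments that are useless as loops.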

The next step is simply running this on all available Lynch films, generating hundreds of sequences to feed into our existing video/generative software, and then doing some manual selecting, sorting, and organizing. Using sound triggers and conditionals, we can scrub within videos, play clips with similar filenames (staying within one film), or randomize everything.
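As an illustration, the "stay within one film" rule could key off a shared filename prefix — the naming scheme below (film name before an underscore) is my assumption, not necessarily the one we use:

```python
import random
from pathlib import Path


def same_film_clips(current_clip, clip_paths):
    """Return every clip whose filename shares the film prefix of
    `current_clip` -- assumes names like 'eraserhead_scene012.mp4'."""
    prefix = Path(current_clip).stem.split("_")[0]
    return [p for p in clip_paths if Path(p).stem.split("_")[0] == prefix]


def pick_next(current_clip, clip_paths, stay_in_film=True, rng=random):
    """Choose the next clip: same-film only, or fully random."""
    pool = same_film_clips(current_clip, clip_paths) if stay_in_film else list(clip_paths)
    return rng.choice(pool)
```

A sound trigger would call `pick_next` with `stay_in_film` toggled by a conditional in the performance code.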

Eventually, I plan to develop a more sophisticated system to categorize videos by shape, intensity, optical flow, and metadata to create enhanced logic, but with Wednesday’s performance approaching, efficiency remains paramount.

Part II - Video Selector

As the extractor produces a lot of sequences, I needed a tool to make selections and check loop quality.
A new tool was born: the “video selector” (I’ll name the tools better one day :) !)

This is a simple Tkinter app that displays all loops in a directory, lets me set a playback speed, and check each video. If I like a video, I select it and it’s moved into a subdirectory, allowing quick skimming of all created sequences.
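The core of such a selector is tiny; a sketch of the "keep this clip" action (the function and subdirectory names are mine, not the actual tool's) might be:

```python
import shutil
from pathlib import Path


def keep_clip(clip_path, subdir_name="selected"):
    """Move an approved clip into a subdirectory next to it,
    creating the subdirectory if needed; returns the new path."""
    clip = Path(clip_path)
    dest_dir = clip.parent / subdir_name
    dest_dir.mkdir(exist_ok=True)
    dest = dest_dir / clip.name
    shutil.move(str(clip), str(dest))
    return dest
```

In the Tkinter app this would be bound to a key or button, with the player then advancing to the next clip in the directory.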

Part III - Enhancing Video Sequencing by Looping

Coming back to my first approach of using AI to detect video sequences, I tried another angle,
this time using optical flow to find similar frames within set minimum and maximum durations from a start point.
I first process the video in 1500-frame chunks to reduce the memory footprint.
When a loop is detected, the tool blends the start and end of the loop over a transition time (15 frames by default), creating seamless loops.
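The blend step can be sketched independently of the detection: cross-fade the clip's tail into its head over the transition window. This is a NumPy-based sketch under my own assumptions about frame layout, not the actual implementation:

```python
import numpy as np


def crossfade_loop(frames, transition=15):
    """Cross-fade the last `transition` frames toward the first
    `transition` frames so the loop point is seamless.

    `frames` is a list of uint8 image arrays; returns a new list whose
    tail fades into the head, with the folded-in head frames dropped."""
    n = len(frames)
    assert n > 2 * transition, "clip too short for the fade window"
    blended = []
    for i in range(transition):
        alpha = (i + 1) / transition  # fade weight toward the head frame
        tail = frames[n - transition + i].astype(np.float32)
        head = frames[i].astype(np.float32)
        blended.append(((1 - alpha) * tail + alpha * head).astype(np.uint8))
    # Drop the head frames that were folded into the tail, so wrapping
    # from the last blended frame back to frames[transition] is continuous.
    return frames[transition:n - transition] + blended
```

The last blended frame equals the final head frame, so the wrap-around lands exactly where the loop restarts.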

The next step will be to merge all three tools into a video toolbox that can:

  • cut videos into sequences using AI detection or optical flow
  • make seamless loops automatically
  • provide a user interface for quickly discarding non-viable videos and keeping the good ones
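One possible command-line shape for that merged toolbox — the subcommand names and flags below are purely illustrative, nothing here exists yet:

```python
import argparse


def build_parser():
    """Sketch of a CLI for the merged toolbox (names are my invention)."""
    p = argparse.ArgumentParser(prog="videotoolbox")
    sub = p.add_subparsers(dest="command", required=True)

    cut = sub.add_parser("cut", help="split a movie into sequences")
    cut.add_argument("movie")
    cut.add_argument("--method", choices=["ai", "optical-flow"], default="ai")

    loop = sub.add_parser("loop", help="turn sequences into seamless loops")
    loop.add_argument("directory")
    loop.add_argument("--transition", type=int, default=15)

    sel = sub.add_parser("select", help="review clips and keep the good ones")
    sel.add_argument("directory")
    return p
```

Keeping the three stages as subcommands of one program would let them share configuration (paths, ffmpeg options) instead of duplicating it across scripts.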

Additional features to add:

  • create categories to organize videos by theme, matching the theme of the live concert
  • automatically convert files into viable MP4 videos with specific ffmpeg conversion options
  • rename all files in a directory according to set rules
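For the renaming rule, even something this simple covers the common case — the `<prefix>_NNN` pattern is just an example, and the rules aren't decided yet:

```python
from pathlib import Path


def plan_renames(filenames, prefix, ext=".mp4"):
    """Compute new names like '<prefix>_001.mp4' for every file,
    keeping sorted order.  Returns (old, new) pairs, no I/O."""
    return [(name, f"{prefix}_{i:03d}{ext}")
            for i, name in enumerate(sorted(filenames), start=1)]


def apply_renames(directory, prefix):
    """Apply the plan to every regular file in `directory`."""
    d = Path(directory)
    names = [p.name for p in d.iterdir() if p.is_file()]
    for old, new in plan_renames(names, prefix):
        (d / old).rename(d / new)
```

Splitting the plan from the I/O makes the rule easy to test, and easy to swap for whatever the final naming convention turns out to be.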