OpenCV2 - Constant video stream, write file 1 minute before and after an event

G’day everyone,
I hope this is the correct place to ask this sort of question, I’m still quite new to Python.

Using OpenCV2, I can stream and write a video file from my webcam… I’d like to take this a step further, ideally by keeping a continuous video buffer (let’s say 1 minute), and when an event happens (e.g. an input from a user will do for now), write a video file covering 1 minute before and after that event… is this possible? My googling has come up short, I suspect I’m using the wrong terminology.

Cheers,
Brad

Yes, it’s definitely possible! But rather than recreating it from scratch, I would recommend looking into OBS Studio. It has a “replay buffer” concept that is pretty much exactly what you’re talking about, and you can signal it to save that buffer as a video file, while continuing to record to the buffer as it goes. OBS can be controlled from Python fairly easily using a websocket.

G’day Chris,
Thanks for your reply. I hadn’t considered offloading the video stream to another process, it’s not a bad idea and would save on my coding, but it might affect portability.

Something I’ll need to take into account is that this will end up on a headless server (something similar to a Jetson Nano or Raspberry Pi 4… I’ve got a few old boards stashed away in a drawer). Whatever video streaming service I use would need to operate without a local user logging in or a GUI; the event to trigger the recording will come via GPIO pins and a button.

Can you do it frame-by-frame?

import cv2

capture = cv2.VideoCapture(...)
frames = []
while True:
    ret, frame = capture.read()
    if not ret:                 # camera read failed / stream ended
        break
    frames.append(frame)
    if too_many_frames():       # buffer longer than one minute?
        frames.pop(0)           # drop the oldest frame
    if special_user_event():
        capture_one_more_minute_of_frames()
        write_to_disk(frames)
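One note on that sketch: the manual `pop(0)` bookkeeping can be handled by `collections.deque` with a `maxlen`, which evicts the oldest entry automatically and makes the one-minute cap explicit. A quick pure-Python illustration (the integers stand in for frames, and the 30 fps figure is an assumption):

```python
from collections import deque

FPS = 30                         # assumed camera frame rate
buffer = deque(maxlen=60 * FPS)  # holds at most one minute of frames

# Simulate appending frames; once full, the oldest frame is dropped.
for i in range(60 * FPS + 100):
    buffer.append(i)             # a real frame would go here

print(len(buffer))               # capped at 1800
print(buffer[0])                 # oldest surviving "frame" is 100
```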

Ah, that makes it a bit harder. I’m not sure what would be best there.

Something worth noting: if your “capture the last minute” trigger doesn’t need to be absolutely precise, it might be okay to capture short snippets of video (1-5 seconds) and keep a buffer of those snippets. When you trigger a capture, it takes the last N clips and concatenates them. There’s a tradeoff between trigger precision and clip management.
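That snippet-buffer idea can be sketched with a deque of clip filenames — a minimal illustration, where the clip names, the 5-second snippet length, and the concatenation step are all placeholder assumptions:

```python
from collections import deque

CLIP_SECONDS = 5          # assumed snippet length
BUFFER_SECONDS = 60       # how much history to keep
buffer = deque(maxlen=BUFFER_SECONDS // CLIP_SECONDS)

def on_clip_finished(path):
    """Called each time the recorder closes a snippet file."""
    buffer.append(path)   # oldest clip name falls off automatically

def on_trigger():
    """Snapshot the last minute's worth of clips for concatenation."""
    return list(buffer)   # hand this list to your concatenation step

# Simulated run: 20 clips recorded, then an event fires.
for n in range(20):
    on_clip_finished(f"clip_{n:04d}.mp4")
to_concatenate = on_trigger()
print(to_concatenate)     # the last 12 clips: clip_0008 … clip_0019
```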

I’ve been doing more reading today and came across the term “Ring Buffer”, which I believe is the same concept you’re describing. It doesn’t need to be exact, it’s just for reviewing purposes… this will mean I won’t later need to trawl through hours of video looking for what I’m after :slight_smile:

I’m currently considering implementing this as 3 separate processes:

  1. Recording and managing the Ring Buffer (10-second clips seem manageable)
  2. A listener process for the ‘trigger’, which concatenates the clips and adds some overlay info
  3. A folder-watching process that looks for the concatenated files and uploads them to a webserver, removing them from storage on successful upload
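For that third process, a minimal polling folder-watcher could look like the sketch below — the `upload` function is purely a stand-in for whatever transfer mechanism you end up using, and `.mp4` in an outbox folder is an assumed convention:

```python
import os
import tempfile
from pathlib import Path

def upload(path):
    """Placeholder: swap in a real HTTP/S3/scp upload.
    Must return True only on confirmed success."""
    return True

def process_outbox(outbox):
    """Upload every finished file in `outbox`; delete each on success."""
    uploaded = []
    for path in sorted(Path(outbox).glob("*.mp4")):
        if upload(path):
            path.unlink()          # remove from storage after upload
            uploaded.append(path.name)
    return uploaded

# Simulated run against a temporary outbox folder.
with tempfile.TemporaryDirectory() as outbox:
    for name in ("event_a.mp4", "event_b.mp4"):
        (Path(outbox) / name).touch()
    done = process_outbox(outbox)
    print(done)                    # uploaded files, oldest name first
    print(os.listdir(outbox))      # empty – files removed on success
```

In the real service you’d run `process_outbox` in a loop with a sleep, and only treat a file as “finished” once the concatenation process has closed it (e.g. write to a temp name and rename when done).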

Yes, exactly!

That sounds like a pretty decent plan. The choice of file format (and specifically video encoding) for the ring buffer has to strike a balance between file size, quality, and ability to concatenate sections conveniently.

You make it look so easy :slight_smile: LOL
Originally I was thinking along very similar lines, just didn’t know how to implement the logic.

I’ve recently discovered the concept of a “ring buffer”, which seems more flexible for my situation, but I really do appreciate the time you put into responding.


Ahhh, yes… I read that certain video formats couldn’t be joined in sections (something about mp4 can’t, MTS can…?). I’ll make sure I pay attention to that so I don’t get caught out.

Yep. I’d advise giving this page a read:

https://trac.ffmpeg.org/wiki/Concatenate#protocol

And then doing some experimentation.
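For what it’s worth, the concat-demuxer route described on that page works by listing the clips in a small text file and remuxing them without re-encoding (which requires all clips to share the same codec and parameters). A sketch that just builds the list-file contents and the command — the filenames are hypothetical, and actually running it assumes ffmpeg is installed:

```python
def build_concat_inputs(clips, list_path="list.txt", output="combined.mp4"):
    """Build the concat-demuxer list file contents and the ffmpeg command."""
    # One "file '<name>'" line per clip, per the ffmpeg concat demuxer format.
    list_file = "\n".join(f"file '{clip}'" for clip in clips) + "\n"
    # -c copy remuxes without re-encoding, so joins are fast and lossless.
    command = ["ffmpeg", "-f", "concat", "-safe", "0",
               "-i", list_path, "-c", "copy", output]
    return list_file, command

list_file, command = build_concat_inputs(["clip_0008.mp4", "clip_0009.mp4"])
print(list_file)
# file 'clip_0008.mp4'
# file 'clip_0009.mp4'
print(" ".join(command))
# ffmpeg -f concat -safe 0 -i list.txt -c copy combined.mp4
```

You’d write `list_file` out to `list.txt` and then run the command with `subprocess.run`.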