In an effort (potentially a stupid effort at this point) to save some cash and keep some flexibility for future expansion and changes, back in 2024 I decided to purchase a Raspberry Pi 4B with 2GB RAM to act as a viewer for my home CCTV system. Part of the longer-term goal was knowing I would likely adopt something such as HomeAssistant and want my CCTV feeds, among other data, displayed as a dashboard in my office. It's now April 2025, the Pi has sat on my desk for longer than it probably should have, and I finally have some spare time to get it up and running so I don't need to run my full home desktop just to have my cameras visible while I'm in my home office.
The ultimate goal for this project is to have a wall-mounted monitor (re-used from an upgrade to my main desktop setup about 6 months ago) displaying a subset of my home security cameras, ideally with the same or a similar layout to the one I had created within Unifi's own web interface. This blog post is going to be a bit long and, where possible, packed with technical detail for anyone considering a similar solution. I'm going to cover a number of things I tried, partly because some of it might be useful for building a similar system and partly because I found all sorts of annoying, out-of-date documentation when trying to figure this out myself!
I will also note here that the overall conclusion for my hardware and use-case was not to use the Pi at all; unfortunately I ran into too many issues that I couldn't overcome. I hope this blog will still help if you're currently thinking of something similar!
The Initial Attempt
Attempting to get the Unifi Web Dashboard to launch on my Pi
I had hoped the easiest solution would be to simply load up the Unifi NVR dashboard on the Pi, log in with some dedicated, restricted-privilege credentials, and re-use the exact setup I had been using on my home PC just running it on the Pi instead. The main issue I anticipated was the inevitable requirement to re-enter credentials at some point when they expired, but I figured there would be a solution to that problem and I'd cross that bridge when I got to it.
When looking through the Raspberry Pi Imaging tool I came across an OS called FullScreenOS, which appeared to do exactly what I needed: load up a webpage full screen and do nothing else. Being the type of engineer that has no desire to build something from the ground up for what I expected to be a fairly simple use-case, I flashed the latest stable version onto the MicroSD card and off I went.
This proved to be my first mistake. I didn’t notice it at the time, but the current stable version of the OS was fairly out of date and didn’t boot properly on a fresh install. After a quick read through the GitHub repo, I found that the recommendation for the Pi 4 was to use the nightly build.
I thought, no problem—I’ll flash the nightly build. (It turned out there were two versions for the Pi 4, so I chose the newest one.)
I made some great progress here. Once it was flashed, I powered up the Pi and got to a web browser that actually worked—success! Using VNC (because I really couldn’t be bothered to find a spare USB keyboard and mouse for this), I added another public URL and confirmed that there was full internet connectivity via the Ethernet cable I had plugged in—it could load pages without issue.
I had high hopes of just copy-pasting the direct URL for the dashboard page and, when prompted, entering login credentials. Easy, right?
As it transpired, easy it was not. For reasons I still can't quite figure out, the page just wouldn't load, and the error message provided didn’t help me narrow down the actual fault. It didn’t seem to matter whether I used the local IP address of the NVR directly (which it did have access to—I checked!) or Unifi’s web-based cloud viewer that connects over the internet. I wasn’t able to figure out why this didn’t work, but I could open a new Chromium window and manually browse to both URLs and view the feeds without issue.
At that point, I realised this was going to be more trouble than it was worth. I couldn’t be sure whether the issue was with my configuration or something strange related to running a nightly, relatively untested image—so I went back to Google searching.
All things lead to RTSP
And a whole different world of pain and suffering...
After quite a bit of Google searching and a load of scrolling through the Unifi forums I kept finding a common theme: various projects to display RTSP feeds. The trouble was that the projects, and a lot of the documentation, were anywhere from 3 to nearly 12 years old, and almost all of it was out of date in some way, shape or form, which made it rather hard to piece together anything remotely resembling a working solution. One major issue I kept running into was the deprecation of a piece of software called omxplayer, which handled the actual playback in just about every guide I could find on how to set up a viewer using RTSP. I should also note that while the project does still exist and you can still install it on older OS's, this wasn't a path I wanted to go down; the idea of running out-of-date software and operating systems that I know will receive no further security patches didn't sit right with me, so the idea was rapidly written off and I needed to find a good alternative.
While doing some research I found an open source project, OpenSurv, which appeared to do exactly what I wanted and handled a lot of the annoying, complicated parts of the setup for me. Result! I managed to get it installed pretty easily following the instructions on their GitHub, but ran into errors immediately. The first issue was that the service wouldn't start properly on my Pi 4. After some diving through the GitHub repo I stumbled across a discussion post that highlighted the missing piece: a python-yaml apt package, without which nothing starts up properly. I ran `sudo apt install python3-yaml`, restarted the Pi, and the example setup was working as expected.
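For anyone hitting the same thing, the whole fix amounted to two commands over SSH (a reboot was the lazy option; restarting the display manager OpenSurv runs under, as described further down, should also do it):

```bash
# Missing dependency that stopped the OpenSurv service starting on my Pi 4
sudo apt install -y python3-yaml
# Reboot so the service comes back up cleanly
sudo reboot
```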
The next challenge was getting the Unifi camera RTSP feeds to show up at all. It turns out Unifi gives you an RTSPS link, which (as far as my limited googling could tell) isn't a widely agreed-upon standard and is generally just not supported. I tested it on my local desktop using Windows 11 and VLC and confirmed I couldn't stream the video, which at least ruled out an issue with my Pi's configuration; I was pretty confident the problem was with the stream URL or the stream itself. An unreasonable amount of late-night googling later and I found the problem. Unifi links look something like `rtsps://192.168.0.1:7441/LGRLugxf2psmWBuJ?enableSrtp`, which as far as I can tell no application can actually do anything with. The trick is to change the scheme from `rtsps` to `rtsp`, change the port from `7441` to `7447`, and delete the `?enableSrtp` query string at the end, at which point VLC or your application of choice should be able to connect to the stream without any issues.
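If you want to script that conversion rather than hand-editing each link, a quick sed one-liner (using the placeholder link above) does the job, and ffprobe from the ffmpeg package is a handy way to sanity-check the result from any machine:

```bash
# Convert a Unifi RTSPS link to plain RTSP: swap the scheme, swap port 7441 for
# 7447 and drop the ?enableSrtp query string (placeholder IP/stream ID from above)
UNIFI_URL="rtsps://192.168.0.1:7441/LGRLugxf2psmWBuJ?enableSrtp"
RTSP_URL=$(echo "$UNIFI_URL" | sed -e 's|^rtsps://|rtsp://|' -e 's|:7441/|:7447/|' -e 's|?enableSrtp$||')
echo "$RTSP_URL"   # rtsp://192.168.0.1:7447/LGRLugxf2psmWBuJ

# Optional sanity check that the stream answers (ffprobe is part of ffmpeg)
ffprobe -rtsp_transport tcp "$RTSP_URL"
```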
The next challenge I ran into was the result of some outdated examples in the GitHub repository. OpenSurv configuration is defined through a series of YAML files, and at the time I found it, one of the example files looked a little bit like this:
```yaml
#THIS IS A YAML FILE, INDENTATION IS IMPORTANT. ALSO DO NOT USE TABS FOR INDENTATION, BUT USE SPACES
#Note2 demo is with image urls but you can do the same with camera streams or any other streams Opensurv supports,
#You can also mixed types of streams on one screen
#This is an demo how to configure demo layout 1 on a monitor with resolution 1920x1080
essentials:
  screens:
    - camera_streams:
        - url: "https://images.opensurv.net/demo.png"
          imageurl: true
          force_coordinates: [600, 0, 1920, 800]
        - url: "https://images.opensurv.net/demo.png"
          imageurl: true
          force_coordinates: [0, 0, 600, 400]
        - url: "https://images.opensurv.net/demo.png"
          imageurl: true
          force_coordinates: [0, 400, 600, 800]
        - url: "https://images.opensurv.net/demo.png"
          imageurl: true
          force_coordinates: [0, 800, 600, 1080]
        - url: "https://images.opensurv.net/demo.png"
          imageurl: true
          force_coordinates: [600, 800, 1260, 1080]
        - url: "https://images.opensurv.net/demo.png"
          imageurl: true
          force_coordinates: [1260, 800, 1920, 1080]
```
As it happens, this isn't valid any more: there have been a series of changes to the configuration since it was written, which makes it silently fail to load. Not knowing this, I plugged in my RTSP feed URLs, deleted the `imageurl: true` lines and tried to get it running, but I was back to square one with a login screen I wasn't expecting to see. After digging through the stock YAML files that ship with the software, including the example `monitor1.yaml` file that is used to create the default configuration, I found the issue: `camera_streams:` is no longer valid and is now simply `streams:`. I have sent a pull request to the project, which has been merged, to make that example configuration accurate and valid, so hopefully you won't run into this issue if you try to follow in my steps!
I replaced `camera_streams:` with `streams:` and my final config for the `monitor1.yaml` file looked something like this:
```yaml
essentials:
  screens:
    - streams:
        # Camera 1
        - url: "rtsp://192.168.0.1:7447/LGRLugxf2psmWBuJ"
        # Camera 2
        - url: "rtsp://192.168.0.1:7447/ABRLugxf2psmWBu3"
        # Camera 3
        - url: "rtsp://192.168.0.1:7447/CDRLugxf2psmWBuJ"
        # Camera 4
        - url: "rtsp://192.168.0.1:7447/EFRLugxf2psmWBuJ"
```
I restarted the service following the instructions on the git repo (a simple `systemctl restart lightdm.service`) and my cameras started to appear. Result, job done, let's call it a day.
Then I started to notice the cameras stutter, and the timestamps on the cameras were not in sync. Weird, but these things happen, I guess. I tried adding some custom settings the GitHub repo suggested, including adding `freeform_advanced_mpv_options: "--rtsp-transport=tcp"` as a new line under each of my streams, so the config looked a bit like this:
```yaml
essentials:
  screens:
    - streams:
        # Camera 1
        - url: "rtsp://192.168.0.1:7447/LGRLugxf2psmWBuJ"
          freeform_advanced_mpv_options: "--rtsp-transport=tcp"
        # Camera 2
        - url: "rtsp://192.168.0.1:7447/ABRLugxf2psmWBu3"
          freeform_advanced_mpv_options: "--rtsp-transport=tcp"
        # Camera 3
        - url: "rtsp://192.168.0.1:7447/CDRLugxf2psmWBuJ"
          freeform_advanced_mpv_options: "--rtsp-transport=tcp"
        # Camera 4
        - url: "rtsp://192.168.0.1:7447/EFRLugxf2psmWBuJ"
          freeform_advanced_mpv_options: "--rtsp-transport=tcp"
```
This change did seem to help a little bit, but ultimately the system still experienced poor framerates and stuttering. I logged back into the Pi over SSH and, using `htop`, found that the CPU was at 100% and the load averages were high enough that it was never going to catch up.
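As an aside, if you're chasing similar symptoms on a Pi, a couple of extra checks alongside `htop` can rule out under-voltage or thermal throttling making things worse (`vcgencmd` ships with Raspberry Pi OS):

```bash
uptime                   # load averages at a glance
vcgencmd measure_temp    # SoC temperature
vcgencmd get_throttled   # anything other than throttled=0x0 means the Pi has throttled
```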
I noticed that despite using a 1080p screen and having four cameras, each individual feed was being resized to 1920x1080. I had hoped that by reducing this down to the stream quality coming from the Unifi NVR, it might improve performance by avoiding any unnecessary smart resizing or similar processing, which would ideally reduce CPU usage.
Eventually, I found the Python file in the Git repo responsible for executing the `mpv` command (which ultimately creates and manages the streams). I manually set this to use the lowest quality of the streams coming from the Unifi NVR (640x360), replacing the variable that previously set it to the screen resolution.
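I won't reproduce the OpenSurv code here, but the net effect of the change was equivalent to pinning the mpv window to the stream's native 640x360 rather than scaling it to a 1920x1080 slice of the screen, something like this on a bare mpv command (stream details are the placeholders used throughout this post):

```bash
# Force a 640x360 window at the top-left of the screen instead of scaling the
# feed up to the display resolution; --no-border drops the window decorations
mpv --no-border --geometry=640x360+0+0 --rtsp-transport=tcp \
    "rtsp://192.168.0.1:7447/LGRLugxf2psmWBuJ"
```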
I will say that this mostly worked. There were still a couple of seconds of lag, and the framerate remained fairly poor. However, I then noticed that the overall quality was also significantly degraded. While you could somewhat identify people and vehicles in the camera's view, even reading the registration plate of my car (which is crystal clear in daylight at full quality) was almost entirely impossible.
I tried a whole host of things, and I'm going to document some of them and their findings here in the hope of making your life easier if you're trying to create something similar.
Various Resolutions
I tried a series of resolutions. Having read that 1080p should in theory have been fine, and that the Pi 4 Model B should be able to handle 4K 60fps video, I had hoped dropping all four streams down to 720p might work. Sadly, I wasn't able to get anything above 360p to load without stuttering and poor performance.
VLC to the rescue?
I tried switching out the mpv streaming for VLC, which sort of worked but also sort of didn't. From my fairly short experiment, a lot of the command line arguments mpv has for sizing and positioning just didn't seem to be available in VLC, which meant I could get the streams but they loaded in windows in the middle of the screen, on top of one another... VLC's performance did seem to be a lot better, and it worked more effectively out of the box with no bespoke configuration, but without the ability to make it windowless and position it the way mpv lets me, I struggled to see it actually working, and I switched back to testing various mpv configurations in the hope that one of those might work.
Setting Video Output
After realising VLC was not going to be a viable option without a lot more complexity than I was hoping for, I started experimenting to see what might and might not work. The mpv command lets you define the video output driver to use. I'm not 100% sure what the default is, but I'm fairly sure it falls back to software scaling and decoding for everything. The Pi 4 does seem to support hardware decoding and encoding (I think, from my Google searches!), which should, if I can get it right, solve a lot of the performance issues I've been seeing. All of these tests were conducted with the `--hwdec=auto-unsafe` flag set. This is a table of my findings:
Video Output | Result |
---|---|
libmpv | Does not work at all |
gpu | Works but laggy, no performance improvement |
gpu-next | Works but laggy, no performance improvement |
vdpau | Does not work at all |
wlshm | Did not test, listed as software scaling |
xv | Terrible Performance, basically crashed my Pi... Though the quality of the image it froze on for 45 mins was excellent... |
sdl | Terrible Performance, basically crashed my Pi... |
dmabuf-wayland | Mostly worked, seemed to have weird issues loading the first time but same lag and poor framerate as others |
vaapi | Does not work at all |
x11 | Did not test, listed as software scaling |
null | Did not test, I want video output... |
image | Does not work at all |
tct | Does not work at all |
caca | Does not work at all |
drm | Does not work at all |
sixel | Does not work at all |
This ultimately meant that of the five outputs that did anything, two pretty much bricked my Pi (both required a power cycle and a race against the service before it tried to load the streams again), and none of them performed any better than the others. For further testing I set it to `gpu`, which seems to be the general recommendation from what I've seen, and given it performed no worse than the other options that worked viably, it seemed a sensible choice.
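If you want to repeat this kind of comparison without editing OpenSurv itself, a single-stream test from the terminal is enough; just swap the `--vo` value for each row of the table (again, the stream details below are placeholders):

```bash
# Test one camera with a specific video output driver and hardware decoding enabled
mpv --vo=gpu --hwdec=auto-unsafe --rtsp-transport=tcp \
    "rtsp://192.168.0.1:7447/LGRLugxf2psmWBuJ"
```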
Everything else tried...
To try to improve the performance of the streams, I attempted to set `--profile=low-latency` as one of the command line flags. In the time it took me to write this much of this section (maybe a minute), some of the camera feeds were already lagging around 30 seconds behind, and after another minute or two, two of the feeds dropped entirely and sat waiting to reconnect. When the cameras were loading, the framerate was not truly awful, but it was certainly not the quality I was hoping for.
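For reference, this is what that flag looks like on a bare mpv invocation of one of the streams (same placeholder details as above):

```bash
# Same test command as before, with mpv's built-in low-latency profile applied
mpv --vo=gpu --hwdec=auto-unsafe --profile=low-latency --rtsp-transport=tcp \
    "rtsp://192.168.0.1:7447/LGRLugxf2psmWBuJ"
```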
I did try running just a single camera, but it still immediately started to lag behind live, and with no clear path to resolving that, I rapidly ran out of good options for using this project.
Anthias
Maybe digital signage tools are the way to go?
After abandoning the original plan, I started looking through the options in the Pi imaging software in the hope there might be a gem I had missed, and found Anthias, an open source digital signage project advertised as being able to stream video, including RTSP. I got it imaged onto the Pi and booted it up (and spent longer than I care to admit realising I'd wiped the WiFi settings, so needed to re-connect the Ethernet cable again).
I sadly found fairly quickly that this wasn't going to be the solution for me. The first thing I tried was adding one of the streams I had previously configured, but I got errors; it looks like the project does support RTSP feeds, just not on all hardware or all versions. I also found it would have to cycle through the various feeds on a fixed rotation rather than offering any customisation of the layout and structure, which again was a no from me.
For a fairly basic digital signage setup, though, if you just want to load a public webpage or show an image, it does look promising and was pretty easy to use once it connected to the network.
Ubuntu Instead of Pi OS?
Reviewing the original RTSP attempt, Ubuntu seemed to have positive feedback on the Pi 4. Maybe we can see some success here?
I'm going to try to get OpenSurv working on Ubuntu 24.04 LTS in the hope it's a bit more stable; from what I've seen, folks have got this running, though some did report performance issues. My intention is to test this far less thoroughly than my original attempt because, to be perfectly honest, I'm getting bored of trying to get to a working solution now!
The performance of the same three streams as before was terrible (notably worse than prior testing), and the second all three feeds loaded, the Pi locked up and went unresponsive. I had no desire to waste more time on this, and decided that trying to pull the streams myself was just not the right approach here.
Unifi Protect Electron App
To the rescue?
Through a lot of googling for Unifi Protect on Linux, in the hope I might be able to install the same app I already use on my iPhone and Apple TV, I stumbled across the unifi-protect-viewer GitHub project. This got me rather excited that I might have a way both to view the cameras on my Pi and to re-use the dashboards and functionality I get today when I have the Unifi interface open on my desktop.
I started off by re-installing my Pi with the latest stable release of Raspberry Pi OS so I had a fully clean slate. To test the initial connectivity and performance (and because I didn't want to dig out a keyboard and mouse), I enabled VNC on the Pi to let me interact with the desktop. This was as simple as running `raspi-config` from the SSH session I had established, going into "Interface Options" and enabling VNC. I had a go at just downloading the pre-built binaries for the project, but failed to figure out how to get them to work properly on the Pi, and ended up going down the path of installing node.js and npm through apt.
I could not, however, figure out how to actually launch the application on the Pi. I'm fairly sure I was doing something wrong, but to check the app worked how I wanted at all, I pulled down the Windows version and gave it a try on my desktop, where it worked exactly as I hoped. The only small issue I ran into was it not loading the camera view I wanted, but frankly that was the least of my concerns at this stage! I was fairly confident that if I could get the app to launch on the Pi, I could script it to load on boot and go full screen with minimal issues.
After a lot more staring at my screen than I'd care to admit, I realised there wasn't an ARM64 Linux build being created. I have submitted a PR, now merged into the repo, to add this natively. With that change merged, you can build an ARM64 Linux version using `npm run build:arm64:linux`, which will give you a new directory in your build folder and a binary you can double-click to launch the application.
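So, from a clone of the unifi-protect-viewer repo, the ARM64 build boils down to:

```bash
npm install                 # pull down the project's dependencies
npm run build:arm64:linux   # emits an ARM64 Linux build into the build output folder
```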
I entered the server URL and a restricted-permission username and password to do some testing, and it worked! Until it didn't... Unfortunately, I was once again plagued by all the issues I faced earlier: streams lagging heavily, terrible performance, the Pi saturating its onboard CPU, and load averages higher than was ever practical to sustain. I did experiment with the lower-quality streaming option that the Unifi Protect app offers, but unfortunately that wasn't sufficient either. I am sure there are people smarter than I am who know exactly why this is, and who read this far into the blog knowing I'd arrive at this answer one way or another. This means I've pretty much ruled the Pi out for this project, which is rather annoying given that was the entire reason I purchased it in the first place!
The Answer & Next Steps
In a rather unsatisfying end to the blog, I don't have a great deal of recommendations on how to proceed, other than to avoid a Raspberry Pi for this particular task: it seems to be entirely underpowered, or at the very least the Pi 4 Model B certainly was for me.
My current plan is to use a spare AppleTV to power the spare monitor I want to use for the camera feeds. This will let me pair the AppleTV over Bluetooth with my sound mixer so I can listen to Spotify in my office without my full desktop PC running (and watch YouTube or similar if I so desire), while still having the camera feeds up when I need them. From previous testing on a different AppleTV at home, the Unifi Protect app appeared to work well for this use, so I'm a bit more confident in this approach. Should the AppleTV not work out, I may look to re-use a MiniPC currently intended for a HomeLab project to run Linux and the above protect viewer, but I'd rather not do that if I can help it!