Ambilight Clone

An ambilight clone that works with videos played directly on the TV. No PC or custom media centre.



The Walrus made a post to Ambilight Clone:

LEDs and WiFi

Well, it's been a while. Life got in the way. The good news is, I've actually been doing stuff other than image processing.

The distance between my TV and the Pi that would be looking at its screen is not insignificant. I think running a cable round the edge of the room to get from one to the other would just cause all sorts of issues. So, wireless would be nice.

How do you wirelessly connect a strand of wired LEDs? I figured I could use another Pi (Camera Pi could shout at LED Pi over WiFi, and LED Pi could then light things up), but a whole Pi is a bit overkill for something so trivial. Then I remembered that the ESP8266 is a thing that exists, and it's pretty much perfect for this: no real processing grunt to speak of, but it has an amount of I/O, is programmable, and is dead cheap from China.
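To give an idea of what "Camera Pi shouts at LED Pi" might look like in practice, here's a minimal sketch of the sending side. Everything in it is an assumption on my part: the address, the port, and the one-datagram-per-frame protocol are all made up, and the receiving end would do the actual LED driving.

```python
import socket

def send_colours(colours, addr):
    """Pack (r, g, b) tuples into 3 bytes per LED, in chain order,
    and fire the lot at the LED controller as a single UDP datagram."""
    payload = bytes(channel for rgb in colours for channel in rgb)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, addr)
    return payload

# e.g. send_colours([(255, 0, 0), (0, 0, 255)], ("192.168.0.50", 7777))
# (hypothetical address/port for the LED controller)
```

UDP suits this sort of thing: a dropped frame just means the LEDs are a fraction of a second stale, which nobody will notice.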

I bought two off eBay (in case I blew one up somehow) and a USB/TTL adapter so I could talk to it.

Whilst the ESP8266 was riding the slow boat from halfway round the world, I started looking into controllable LEDs. It seems there's two main types of chip used to drive these things: the WS2812 and the WS2801. The difference between the two is how you talk to them: the former uses a one-wire protocol, the latter an SPI-like interface. I decided on the WS2801 because a) precise timing requirements are just crap I don't want to deal with, and b) the ESP8266 has hardware SPI.

So, I bought a bunch of these (also from eBay). I got two chains of 20 (they daisy-chain nicely), making 40 controllable LED elements.

There were several options that had just a single LED, but I figure with these things they'll be plenty bright. I can always turn the brightness down; it's a bit more difficult to make a single LED brighter, at least whilst keeping the magic smoke inside.
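For what it's worth, the WS2801's data format is about as simple as it gets: three bytes per element, clocked through the chain in order. Here's a rough sketch of building that payload, including the "turn the brightness down" trick; the actual SPI transfer would use whatever the ESP8266 side exposes, so this only covers the framing.

```python
def ws2801_frame(colours, brightness=1.0):
    """Flatten (r, g, b) tuples into the byte stream a WS2801 chain expects:
    3 bytes per element, with the first tuple ending up at the first chip in
    the chain. Scaling every channel is the cheap way to dim the whole strip."""
    return bytes(min(255, int(channel * brightness))
                 for rgb in colours for channel in rgb)
```

After a frame has been shifted out, the WS2801 latches the colours once the clock sits idle for around 500µs, so the driving side just needs a short pause between frames.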

Back to the ESP8266. It's still on the boat, the LEDs are in the post, so I start reading up on how to properly use the thing as I have nothing better to do. It's programmed in C, and there's an SDK floating around.

Then I came across nodemcu, a firmware that basically wraps the SDK in a lua interpreter. Using a scripting language on something as low-power as the ESP8266 seems like it could be a strange idea, but also significantly reduces the amount of effort I need to put in to get code running on it.

Time passes, the LEDs arrive, the ESP8266s arrive.

The ESP8266s I bought are completely bare: tiny little things. Each needs some pull-up/pull-down resistors soldering on, and a button or two to reset it or put it into a mode where it can be programmed. I make a horrible mess soldering all that nonsense on, and then make even more mess because there are various descriptions on the internet of what needs to be connected to what, and not all of them are entirely accurate.

Good thing I've got a spare - I'll just accept that I'm making a dog's breakfast of this one and do the next one properly on some stripboard or something.

I finally get it powering up reliably (one pin was floating, which meant it'd sometimes power up and sometimes not - most irritating!) so get the nodemcu firmware flashed on to it. The serial console now gives me a lua prompt and I can interact with the device through code entered directly into the console. Fantastic!

Driving the LEDs via SPI, however, didn't work. The first few would light up, but not in the colours I instructed them to, and at seemingly random intensities. None past the first eight would light up at all.

At this point, stupid o'clock in the morning and feeling frustrated, I realised I hadn't yet bought the camera module for my Pi - even if these damn things were working I'd not be able to do anything useful with them. So I put in an order for a Pi camera, and went to bed.



The Walrus made a post to Ambilight Clone:

Squaring Up

This is a minor follow-up to the last post. If OpenCV weren't the wonderful box of tricks that it is, I imagine this post would be much longer!

Transforming an off-angle image to another view is really, really easy. All I needed to do was generate a transformation matrix from the input points (the four corners of the pink area) and the output points (the four corners of an actual rectangle), then feed that matrix into a second function to actually perform the transformation. The warp function takes the area of the source image defined by the matrix, warps it to the output area, and creates a separate image containing the result.

It's literally just this:

# Perspective transform
source_points = np.array([top_left, top_right, btm_left, btm_right], dtype=np.float32)
target_points = np.array([[0, 0], [799, 0], [0, 449], [799, 449]], dtype=np.float32)

transform = cv2.getPerspectiveTransform(source_points, target_points)
demo = cv2.warpPerspective(img, transform, (800, 450))

For testing, I created a simple image with a circle and some dots in a vaguely recognisable pattern so I'd be able to tell if it was giving me a square output. This is the camera's view of it:

Input, and the pink it found. I should probably fix the focus.

And this is what it looks like post-transformation. The circle looks like a circle, and the dots are in the right places. Sorted!

The OpenCV package actually contains a tiny wizard.



The Walrus made a post to Ambilight Clone:

Cornering


Just to recap, the point of this early work is to conclusively identify the corners of the TV screen in a captured image entirely automatically. This is so that the trapezoidal view of the screen can be warped to a straight-on rectangular view which will be used for all the colour-related stuff.

In this post, the part of 'TV Screen' is played by a window containing a pink rectangular image.

So, at this point I have the ability to capture an image from the camera, and black out everything on the image that isn't bright pink:

import cv2
import numpy as np

HSV_MIN = np.array([140, 80, 200], dtype=np.uint8)
HSV_MAX = np.array([160, 255, 255], dtype=np.uint8)

cam = cv2.VideoCapture(0)
while True:
    read, img = cam.read()
    if not read:
        continue
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    demo = cv2.inRange(hsv, HSV_MIN, HSV_MAX)  # demo = pink only!

Next up, finding the edges of that highlighted area.

OpenCV has a findContours function. In short, a contour is a contiguous line on the image. findContours picks out any contours in the image (i.e. the edges of the rectangle making up the TV screen area) and gives me a list of points describing the path each contour takes. Multiple contours can be found, but I'm only interested in the biggest one.

Here's a visual representation, where I've just drawn the points identified by findContours over the captured image:

image, contours, hierarchy = cv2.findContours(demo.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    biggest = sorted(contours, key=lambda c: c.size)[-1]
    for point in biggest:
        cv2.circle(img, tuple(point[0]), 3, (255, 255, 255))
The dots are all the points along the contour

I'm close to what I want, but there's too many points, because slight curves are being identified along the rectangle's straight edges. The solution is, of course, more OpenCV magic: approxPolyDP. This function takes a shape, described as a list of points, and generates an approximation using fewer points. I can easily use this to simplify the complex contour down to just four points.

# arcLength gets the total perimeter of the contour; a fraction of it becomes the
# tolerance for how far the simplified polygon may deviate from the original contour.
# Straight on, 0.18 works for the shortest edge. Allow a smaller value for off-angle capture.
epsilon = 0.10 * cv2.arcLength(biggest, True)
squared = cv2.approxPolyDP(biggest, epsilon, True)

# If there's four points, flatten it and convert to topleft/topright/btmleft/btmright
if len(squared) == 4:
    squared = list(squared.reshape(4, 2))

    # Sort by X. First two are left, last two are right.
    squared.sort(key=lambda e: e[0])
    left, right = squared[:2], squared[2:]

    # Sort each of left/right by height. First is top, last is bottom.
    top_left, btm_left = sorted(left, key=lambda e: e[1])
    top_right, btm_right = sorted(right, key=lambda e: e[1])

    points = np.array([top_left, top_right, btm_right, btm_left, top_left])
    cv2.polylines(img, [points], False, (255, 255, 255))

Once found, the points are classified as Top Left, Top Right, Bottom Left, and Bottom Right, based on their X/Y values. As if by magic, I can now identify an off-angle rectangular pink area in a captured frame, find its four corners, and use those to exactly outline the area.

This is totally a TV screen


zoe commented on Cornering:




The Walrus made a post to Ambilight Clone:

Going Pink

Before I can get into the fun stuff like finding colours and working out how to light stuff up, I've got to deal with what the TV screen looks like to the camera. There's a couple of issues.

The camera is not going to be perfectly positioned to exactly capture the image on-screen and only the image on screen, so I need to define the area that is relevant. The camera is also not going to be positioned directly in front of the centre of the screen. This means that the off-angle view I capture, which may look something like this:

What the camera thinks the screen looks like

Will need to be changed into what I would see if the camera was positioned perfectly, which is more like:

What I need the screen to look like

OpenCV has something in its toolkit for warping images (or parts of images), so actually changing the trapezoidal view to a rectangular view is not going to be particularly difficult. I will need to identify the four corners of the trapezoid to feed in to the warp, though.

I figure there's two ways to do this:

  1. Create an application that can capture a frame from the camera, let me click on four locations, and then save those as the input coordinates.
  2. Identify the corner coordinates of the TV screen automagically, by using a pattern or colour not likely to be found elsewhere in a captured image.

I decided to go for number 2, because I'm lazy and the coordinates would need re-identifying every time anything moves. And user interfaces are annoying.

Something you don't see every day

Magenta. I'll make the whole screen magenta, and then find it in the captured image. The bright pink trapezoid will be the screen, and there's literally 0% bright pink anywhere in my home so there'll be nothing for the calibration routine to get confused over.

So, what's a sensible way to find specific colours in an image? RGB would kinda-sorta work, but it's difficult to specify a range and lighting differences trip it up. The right answer is to use HSV!
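To illustrate why HSV copes where RGB struggles: a bright magenta and a dim magenta have wildly different RGB values but (near enough) the same hue. Here's a little stdlib-only helper of my own that approximates OpenCV's HSV convention (H halved to fit 0-179, S and V scaled to 0-255); it's for illustration, not anything from OpenCV itself.

```python
import colorsys

def to_opencv_hsv(r, g, b):
    """Convert an RGB colour to OpenCV-style HSV: H in 0-179 (degrees / 2),
    S and V in 0-255."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 360 / 2), round(s * 255), round(v * 255)

print(to_opencv_hsv(255, 0, 255))  # full magenta -> (150, 255, 255)
print(to_opencv_hsv(128, 0, 128))  # dim magenta  -> (150, 255, 128)
```

The hue stays put at 150 regardless of brightness, which is exactly what makes a hue-based range robust against lighting.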

There's a useful block of code on stackoverflow, which I'll include here in case it disappears:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)

def nothing(x):
    pass

# Creating a window for later use
cv2.namedWindow('result')

# Starting with 100's to prevent error while masking
h,s,v = 100,100,100

# Creating track bar
cv2.createTrackbar('h', 'result',0,179,nothing)
cv2.createTrackbar('s', 'result',0,255,nothing)
cv2.createTrackbar('v', 'result',0,255,nothing)

while True:
    _, frame = cap.read()

    # converting to HSV
    hsv = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)

    # get info from track bar and apply to result
    h = cv2.getTrackbarPos('h','result')
    s = cv2.getTrackbarPos('s','result')
    v = cv2.getTrackbarPos('v','result')

    # Normal masking algorithm
    lower_blue = np.array([h,s,v])
    upper_blue = np.array([180,255,255])

    mask = cv2.inRange(hsv,lower_blue, upper_blue)

    result = cv2.bitwise_and(frame,frame,mask = mask)

    cv2.imshow('result',result)

    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()


This basically defines an upper limit of maximum HSV, and gives you three sliders to fiddle with to define the lower end of the range. Anything not in the range is blacked out.

It's convenient that magenta is fairly close to the end of the H scale, else I'd have to adjust the 'upper' limit in the code to a more sensible value. Here's the progression, showing what happens at various levels of filtering up to the goal of showing magenta only:

Cutting out some of the colours
Past this point, the pink goes to black
Filter out unsaturated things. Grey is basically pink with zero saturation
And knock out everything that's not particularly bright, leaving only the rectangle

To give myself a bit of range to account for lighting differences, I'll define the minimum HSV as (140, 80, 200) and the maximum HSV as (160, 255, 255). Anything not within this range goes to black. All I'll need to do is make my TV screen magenta and it'll be the only thing the camera can see.

The job of finding the edges of the region is a bit more interesting, but that will be in the next post!



The Walrus made a post to Ambilight Clone:

Baby's First Image Capture

It's been a while. I didn't mean to abandon this for quite so long, but I've not had much free time to arse about with toy projects. Really, I still don't. This is going to be done in a lot of pretty small steps.

There's two stages to this project:

  1. Capturing an image from the screen and picking out colours.
  2. Actually lighting up a bunch of LEDs with those colours.

I'm only going to worry about #1 for now, because I can do that with the crap I have lying around and it's not going to cost me anything except time. I've decided on the following toolkit for this:

  • OpenCV - Big box 'o magic for image processing.
  • Python - Because it's clearly superior to everything else.
  • PS2 EyeToy USB camera - It's a camera. It's a bit naff but it's one I have lying around, and I'm too cheap to buy anything for something that might not work.

I also have a Raspberry Pi B+ on a shelf somewhere (this is probably true for 60% of Pis ever bought...) which may be useful in the future, but it's going to stay on the shelf for now. Developing on my desktop is going to be way more comfortable.

As it turns out, the EyeToy was a bloody mistake. I have four different drivers for two different possible camera models sitting in my downloads directory. Only one of them works, and only if it's plugged into one specific sodding USB port. OpenCV setup was much easier. I downloaded the pre-compiled binaries (cv2.pyd and a crapload of DLLs), dumped them in site-packages, and I'm good. The code for image capture itself is dead simple to boot:

import cv2

cam = cv2.VideoCapture(0)   # 0 -> index of camera
cv2.namedWindow("cam-test", cv2.WINDOW_AUTOSIZE)
while True:
    read, img = cam.read()
    if not read:
        continue

    cv2.imshow("cam-test", img)
    cv2.waitKey(20)  # wait a bit for the image to be displayed

Yeah. That's it. For your consideration, a hall-of-mirrors effect of this post being written:

I wasted a lot of time breathing life into the piece of crap EyeToy, so all I've got to show for it is some proof that it is actually capable of collecting light.

Next up: cropping the collected pictures to the TV screen so I can pick colours out without also grabbing the wall, speakers, or any other assorted junk.


I have this pi board. Is it powerful enough to do this ambilight project?
I'm an ambilight fanatic.
I created my own ambilight using an stm32f4 chip and feed it with a CVBS signal.
Just want to ask: can I use the pi board to detect the screen, straighten it, and then export it as low-resolution CVBS to the stm32f4 and let that do the rest of the work?
Thanks a lot

zoe replied:

this is my pi board (not yet, gonna buy it if it can do these incredible things you wrote)



The Walrus made a post to Ambilight Clone:

Project Plan

I like the Ambilight effect. Glowing lights around the back of the TV, lighting up the wall and room with whatever is happening on-screen. In all the demo videos I've seen, it looks awesome.

There's a lot of results if you search the internet for 'Ambilight Clone'. A lot of them look pretty good. Not entirely on the same level as Ambilight itself, but would be good enough for me.

There's an issue though. To get the video input needed to work out how to light the LEDs, every single one of them relies on either splitting an HDMI or analogue signal, or on actually playing the video on the device running the analysis software.

This seems a bit crap to me, but it could be that it's the only actually reasonable way of doing it. So, I'm going to find out. This is what I'm going to try to do:

Highly Technical Diagram

The camera captures a video of what's playing on the TV, the PC (or maybe a Pi?) works out what colours need to be lit where, and then sets a bunch of LEDs to do it.

Seems fairly simple.
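That "works out what colours need to be lit where" step will presumably boil down to averaging patches of the frame along each screen edge, one patch per LED. As a sketch of the idea only (pure Python, with a plain list of pixels standing in for a real frame; none of this is final):

```python
def edge_colours(row, leds):
    """Average the colour of each patch along one screen edge.
    row: list of (r, g, b) pixels along that edge; leds: LEDs on that edge."""
    patch = len(row) // leds
    result = []
    for i in range(leds):
        chunk = row[i * patch:(i + 1) * patch]
        result.append(tuple(sum(px[c] for px in chunk) // len(chunk)
                            for c in range(3)))
    return result

# A half-red, half-blue top edge should give one red LED and one blue LED:
print(edge_colours([(255, 0, 0)] * 4 + [(0, 0, 255)] * 4, 2))
# -> [(255, 0, 0), (0, 0, 255)]
```

The real version would work on warped camera frames rather than neat lists, but the shape of the problem is the same.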

I'm just going to assume it'll work fine and ignore the voice in the back of my head telling me "If this was so simple and worked well, everyone would be doing it. You idiot."



