DIY motorized blinds for $40

I have some 2” wooden blinds in my house that I’ve been wanting to motorize. Why? I’m lazy and I thought it would be cool to have.

The best commercial solution for retrofitting existing blinds seems to be Somfy. They have wireless battery-powered systems and fancy-looking remotes. For new motorized blinds, Bali seems to be popular, and they use Somfy for the motorization. There are also some kickstarter things (MOVE, MySmartBlinds), but the last time I looked those didn’t really do what I want. Somfy likely has a good product, but it’s very expensive. It looks like it would cost about $150 per blind, which is just way too much for me. They want $30 just for the plastic wand that holds the batteries (8 x AA). We’re talking about a motor and a wireless controller to tell it what to do. It’s not rocket surgery, so why should it cost $150?

My requirements are:

  • Ability to tilt the blinds to one of three positions (up, middle, down) remotely via some wireless interface. I don’t care about raising or lowering the entire blind.
  • There must be some API for the wireless interface such that I can automate them myself (close at night, open in the morning).
  • Tilt multiple blinds at the same time so they look coordinated.
  • Be power efficient – one set of batteries should last more than a year.

Somfy satisfies this if I also buy their “Universal RTS Interface” for $233, but that only makes their solution even more expensive. For the 6 blinds I wanted to motorize, it would cost about $1200. No way.

I’ve been meaning to get into microcontrollers for a while now, and I thought this would be the perfect project to start with. About a year ago I bought a RedBear BLE Nano to play with some Bluetooth stuff, so I started with that. I got a hobby servo and a bunch of other junk (resistors, capacitors, etc.) from Sparkfun and began flailing around while I had some time off around Christmas. The Arduino environment on the BLE Nano is a little weird, but I got things cobbled together relatively quickly. The servo was very noisy, and it’s difficult to control the speed, but it worked. Because I wanted to control multiple devices at once, BLE wasn’t a great option (AFAIK there is no way to broadcast messages in a way that is power-efficient for the listeners), so I started looking at other options. Eventually I ran across the Moteino.

The Moteino is an Arduino clone paired with an RFM69W wireless radio operating at either 915MHz or 433MHz. It also has a very efficient voltage regulator, making it suitable for battery-powered applications. The creator of the board (Felix Rusu) has put in a lot of work to create libraries for the Moteino that make it useful in exactly my type of application, so I gave it a try. The RFM69 library is lovely to work with, and I was sending messages between my two Moteinos in no time. The idea is to have one Moteino connected via USB to a Linux box (I already have a BeagleBone Black) as a base station which will relay commands to the remote devices. I got my servo working again with the Moteino quickly, as most of the code Just Worked.
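
To give a flavor of the radio code, here’s a minimal sketch of a remote node that waits for a one-byte tilt command and ACKs it. The node/network IDs, key, and command bytes are placeholders, not my actual values:

#include <RFM69.h>
#include <SPI.h>

#define NODE_ID    2      // this device
#define NETWORK_ID 100
#define GATEWAY_ID 1      // the Moteino attached to the Linux box

RFM69 radio;

void setup() {
  radio.initialize(RF69_915MHZ, NODE_ID, NETWORK_ID);
  radio.encrypt("sampleEncryptKey");  // 16 bytes, same on every node
}

void loop() {
  if (radio.receiveDone()) {
    char cmd = radio.DATALEN > 0 ? radio.DATA[0] : 0;
    if (radio.ACKRequested())
      radio.sendACK();
    // act on cmd here, e.g. 'U'p, 'M'iddle, 'D'own
  }
}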

I started out with a hobby servo because I knew it would be easy to control, but the noise and lack of speed control really bothered me. I needed to try something else. I considered higher quality servos, a gear motor with encoder or limit switches, stepper motors, worm gear motors, etc. I was going to end up building 6 of these things to start with, so cost was definitely a big factor. I ended up settling on the 28BYJ-48 stepper motor because it is extremely cheap (about $2), relatively quiet, and lets me control the speed of rotation very precisely. There is a great Arduino library for stepper motors, AccelStepper, which lets you configure acceleration/deceleration, maximum speed, etc. It also has an easy-to-use API for positioning the motor. I found a 5mm x 8mm aluminum motor coupling to connect the motor to the blinds shaft. I then used a zip tie and a piece of rubber to secure the motor to the blinds rail. This doesn’t look very professional, but it’s not something you really see (my blinds have a valance that covers the rail). A better solution involving some kind of bracket would be great, but would increase the cost and require a lot more time. Using the stepper, I was able to smoothly, quietly, and cost-effectively control the blinds.
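
Here’s a minimal sketch of the motor control, assuming the 28BYJ-48 is driven through a ULN2003 on four digital pins. Pin numbers and step positions are placeholders, not my actual values:

#include <AccelStepper.h>

// 28BYJ-48 via ULN2003; note the IN1/IN3/IN2/IN4 pin ordering that
// AccelStepper expects for this motor.
AccelStepper stepper(AccelStepper::HALF4WIRE, 3, 5, 4, 6);

// With half-stepping, one full shaft revolution is about 4096 steps.
const long TILT_UP     = 0;
const long TILT_MIDDLE = 1024;
const long TILT_DOWN   = 2048;

void setup() {
  stepper.setMaxSpeed(500.0);      // steps/sec
  stepper.setAcceleration(250.0);  // steps/sec^2
}

void moveToTilt(long target) {
  stepper.moveTo(target);
  while (stepper.distanceToGo() != 0)
    stepper.run();               // issues at most one step per call
  stepper.disableOutputs();      // cut coil current while idle
}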

I then started to look into power consumption. If you don’t put anything to sleep, the power usage is pretty high. The LowPower library from Felix makes it easy to put the CPU to sleep, which helps a lot. When sleeping, the CPU uses very little power (about 3µA I think), and the radio will wake you up via interrupt if a message arrives. The radio uses roughly 17mA in receive mode, however, which means we’d only get about a week of battery life from a set of high-quality AAs (3000mAh / 17mA = 176h). We need to do a lot better.
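
The basic pattern looks roughly like this (a sketch, assuming the RocketScream-style LowPower API that Felix’s examples use, with the radio’s DIO0 interrupt waking the CPU):

#include <LowPower.h>
#include <RFM69.h>
#include <SPI.h>

RFM69 radio;  // initialized in setup() as in the earlier sketch

void loop() {
  if (radio.receiveDone()) {
    // handle the command, run the motor, etc.
  }
  // Power the CPU down indefinitely; the RFM69's DIO0 interrupt fires
  // when a packet arrives and wakes us back up.
  LowPower.powerDown(SLEEP_FOREVER, ADC_OFF, BOD_OFF);
}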

The RFM69 has a useful feature called Listen Mode that some folks on the LowPowerLabs forums have figured out how to use. In this mode, you can configure the radio to cycle between sleeping and receiving in order to reduce power consumption. There are a lot of options here, but it was discovered that you only need to be in the RX phase for 256µs in order for a message to be detected. When the radio is asleep it uses about 4µA. So if you sleep for 1s and receive for 256µs, your average power consumption for the radio is about 12µA. This is a dramatic improvement, and it means that the device can still respond in roughly one second, which is certainly adequate for my application. Of course, you can always trade even more responsiveness for power efficiency, and people using this method on coin cell batteries certainly do that. There is one user on the forums who has an application with an expected battery life of over 100 years on a single coin cell! I maintain an extension of the RFM69 library, RFM69_WL, which collects some of the other listen mode code that was floating around and extends it so you can set your own sleep/RX durations.
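
For the curious, the duty-cycle arithmetic behind those numbers:

I_avg ≈ (1s × 4µA + 256µs × 17mA) / (1s + 256µs)
      ≈ (4.0µA·s + 4.35µA·s) / 1.000256s
      ≈ 8.4µA

The radio also burns some power ramping in and out of RX on every cycle, which presumably accounts for the gap between this idealized 8.4µA and the ~12µA figure above.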

I’ve measured/calculated my average power consumption to be about 46µA if I run the motor for 12s per day. That comes out to over 7 years of life on a set of 4 AA batteries (Energizer Ultimate Lithium), which is an almost unbelievable number. There are several factors I am not really considering, however, such as RF noise (which wakes the radio, causing increased power consumption), so real-life performance might not come very close to this. Still, if I can get 2 years on a set of 4 AAs I’ll be pretty happy.
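
The math behind that estimate, for reference:

3000mAh / 0.046mA ≈ 65,000h ≈ 7.4 years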

Usually when you buy a 28BYJ-48 it will include a driver board that has a ULN2003A, some connectors, and a set of LEDs for showing which phase of the stepper is active. This is fine for testing and development, but it was going to be pretty clunky to use this in a final solution. It was time to design my first PCB!

I found out early on that it was pretty difficult to talk to other people about problems with your project without having a schematic, so I made one of those. I started with Fritzing, but moved to EAGLE when it was time to do the PCB. It seemed to be the standard thing to use, and it was free. EAGLE has a pretty steep learning curve, but some tutorials from Sparkfun helped a lot. I also got some help from the folks on the LowPowerLabs forums (TomWS, perky), who I suspect do this kind of thing for a living. You can get the EAGLE schematic and board design here.

[Image: EAGLE schematic]
[Image: PCB board layout]

I ordered my first batch of boards from Seeed Studio, as well as a bunch of supporting components from Digikey. The PCB orders typically take a little over two weeks, which is quite a bit more waiting than I’m accustomed to. I was pretty excited when they arrived, and started checking things out. I soon realized I had made a mistake. The component I had in my design for the motor connector (which is a JST-XH 5-pin) was the wrong pitch and size, so the socket I had didn’t fit. Whoops. The hardware world does not play well with my “just try some stuff” mentality from software.

I found an EAGLE library for the JST-XH connectors, used the correct part, and ordered another batch of PCBs. This time I actually printed out my board on paper to make sure everything matched up. I had run across PCBShopper while waiting for my first batch of boards, so I decided to try a different fabricator this time. I chose Maker Studio for the second order, since I could pay about the same amount and get red boards instead of green.

Another two weeks went by, and finally last week I received the boards. I assembled one last weekend using my fancy (and cheap!) new soldering station. It didn’t work! Shit! The Moteino was working fine, but the motor wasn’t moving. Something with the motor driver or connection was hosed. After probing around for a pretty long time, I finally figured out that the socket was installed backwards. It seems the pins in the EAGLE part I found were reversed. Ugh. With a lot of hassle, I was able to desolder the connector from the board and reverse it. The silkscreen outline doesn’t match up, but whatever. It works now, which was a big relief.

[Image: assembled controller board]

I thought about putting the board in some kind of plastic enclosure, but it was hard to find anything small enough to fit inside the rail while also being tall enough to accommodate the Moteino on headers. I’m planning to just use some extra-wide heat shrink to protect the whole thing instead, but haven’t done that yet.

Below are some photos and videos, as well as the entire list of parts I’ve used and their prices. Each device costs about $40, which is a pretty big improvement over the commercial options (except maybe the kickstarter stuff). Also important is that there are no wires or electronics visible, which was critical for the Wife Acceptance Factor (and my own, honestly).

I’m sure a real EE will look at this and think “pfft, amateur!”. And that’s fine. My goals were to learn and have fun, and they were definitely accomplished. If I also produced something usable, that’s a bonus.

[Photos and videos of the finished installation]

Bill of Materials

Part                              Price
28BYJ-48-12V                      $2.08
DIP socket                        $0.19
Motor Coupling, 5mm x 8mm         $1.26
Electrolytic Capacitor, 100µF     $0.30
Ceramic Capacitor, 0.1µF (2)      $0.24
DC Barrel Jack, PJ-002B           $0.93
DC Barrel Jack Plug, PP3-002B     $1.36
2M Resistor                       $0.04
2.7M Resistor                     $0.06
Motor Plug Socket                 $0.21
Tactile Button                    $0.10
PCB                               $0.99
Moteino                           $22.95
AA Holder                         $1.24
Energizer Ultimate AA (4)         $6.00
Total                             $37.95

In order to talk to the devices from a host computer, you’ll also need a Moteino USB ($27). To program the non-USB Moteinos you’ll need an FTDI adapter. LowPowerLabs sells one of those for $15, but you may be able to find a better deal elsewhere.


MP4 improvements in Firefox for Android

One of the things that has always been a bit of a struggle in Firefox for Android is getting reliable video decoding for H264. For a couple of years, we’ve been shipping an implementation that went through great heroics in order to use libstagefright directly. While it works fine in many cases, we consistently get reports of videos not playing, not displaying correctly, or simply crashing.

In Android 4.1, Google added the MediaCodec class to the SDK. This provides a blessed interface to the underlying libstagefright API, so presumably it will be far more reliable. This summer, my intern Martin McDonough worked on adding a decoding backend in Firefox for Android that uses this class. I expected him to be able to get something that sort of worked by the end of the internship, but he totally shocked me by having video on the screen inside of two weeks. This included some time spent modifying our JNI bindings generator to work against the Android SDK. You can view Martin’s intern presentation on Air Mozilla.

While the API for MediaCodec seems relatively straightforward, there are several details you need to get right or the whole thing falls apart. Martin constantly ran into problems where it would throw IllegalStateException for no apparent reason, with no error message or other explanation in the exception. This made development pretty frustrating, but he fought through it. It looks like Google has improved both the documentation and the error handling in the API as of Lollipop, so that’s good to see.
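
For the curious, the basic shape of the API looks something like the sketch below. I’m using the NDK’s C flavor (AMediaCodec, which only appeared later, in API 21) because it’s compact, but the Java flow we bind through JNI is the same: dequeue an input buffer, fill it with a compressed sample, queue it back, then drain decoded output buffers. Sizes and timeouts are placeholders:

#include <media/NdkMediaCodec.h>
#include <media/NdkMediaFormat.h>
#include <stdint.h>
#include <string.h>

void decodeSample(const uint8_t* sample, size_t sampleSize, int64_t ptsMicros) {
    // Configure an H264 decoder. For real streams you must also supply
    // the SPS/PPS as "csd-0"/"csd-1" in the format.
    AMediaFormat* fmt = AMediaFormat_new();
    AMediaFormat_setString(fmt, AMEDIAFORMAT_KEY_MIME, "video/avc");
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_WIDTH, 1280);
    AMediaFormat_setInt32(fmt, AMEDIAFORMAT_KEY_HEIGHT, 720);

    AMediaCodec* codec = AMediaCodec_createDecoderByType("video/avc");
    AMediaCodec_configure(codec, fmt, NULL /* or an ANativeWindow */, NULL, 0);
    AMediaCodec_start(codec);

    // Feed one compressed access unit.
    ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 10000 /* µs */);
    if (inIdx >= 0) {
        size_t cap;
        uint8_t* buf = AMediaCodec_getInputBuffer(codec, inIdx, &cap);
        memcpy(buf, sample, sampleSize);
        AMediaCodec_queueInputBuffer(codec, inIdx, 0, sampleSize, ptsMicros, 0);
    }

    // Drain one decoded frame.
    AMediaCodecBufferInfo info;
    ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);
    if (outIdx >= 0) {
        // 'true' renders to the attached Surface instead of handing us bytes.
        AMediaCodec_releaseOutputBuffer(codec, outIdx, true);
    }
}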

As Martin wrapped up his internship he was working on handling the video frames as output by the decoder. Ideally you would get some kind of sane YUV variation, but this often is not the case. Qualcomm devices frequently output in their own proprietary format, OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka. You’ll notice this doesn’t even appear in the list of possibilities according to MediaCodecInfo.CodecCapabilities. It does, however, appear in the OMX headers, along with a handful of other proprietary formats. Great, so Android has this mostly-nice class to decode video, but you can’t do anything with the output? Yeah. Kinda. It turns out we actually have code to handle this format for B2G, because we run on QC hardware there, so this specific case had a possible solution. But maybe there is a better way?

I know from my work on supporting Flash on Android that we use a SurfaceTexture there to render video layers from the plugin. It worked really well most of the time. We can use that with MediaCodec too. With this output path we don’t ever see the raw data; it goes straight into the Surface attached to the SurfaceTexture. You can then composite it with OpenGL and the crazy format conversions are done by the GPU. Pretty nice! I think handling all the different YUV conversions would’ve been a huge source of pain, so I was happy to eliminate that entire class of bugs. I imagine the GPU conversions are probably faster, too.
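
Hooking the decoder up to that path is pleasantly small. A sketch in the same NDK flavor as above (the SurfaceTexture and its wrapping Surface are created on the Java side and passed down; the function name is mine):

#include <jni.h>
#include <android/native_window_jni.h>
#include <media/NdkMediaCodec.h>

// jsurface: a java Surface constructed around our SurfaceTexture.
void configureForSurfaceOutput(JNIEnv* env, jobject jsurface,
                               AMediaCodec* codec, AMediaFormat* fmt) {
    ANativeWindow* window = ANativeWindow_fromSurface(env, jsurface);
    AMediaCodec_configure(codec, fmt, window, NULL, 0);
    // Output buffers released with render=true now go straight into the
    // SurfaceTexture; updateTexImage() latches the newest frame into the
    // attached GL texture, and the GPU handles any format conversion.
}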

There is one problem with this. Sometimes we need to do something with the video other than composite it onto the screen with OpenGL. One common case is drawing the video into a canvas (either 2D or WebGL). Now we have a problem, because the only way to get data out of the SurfaceTexture (and the attached Surface) is to draw it with OpenGL. Initially, my plan was to ask the compositor to draw this single SurfaceTexture separately into a temporary FBO, read it back, and hand me the bits. It worked, but boy was it ugly.

There has to be a better way, right? There is, but it’s still not great. SurfaceTexture, as of Jelly Bean, allows you to attach and detach a GL context. Once attached, the updateTexImage() call updates whatever texture you attached. Detaching frees that texture and makes the SurfaceTexture attachable to another texture (or GL context). My idea was to attach the compositor to the SurfaceTexture only while drawing it, and detach afterward, leaving the SurfaceTexture free to be consumed by another GL context/texture. For the readback, we attach to a context created specifically for this purpose on the main thread, blit the texture to an FBO, read the pixels, and detach. Performance is not great, since glReadPixels() always seems to be slow on mobile GPUs, but it works. And it doesn’t involve IPC to the compositor.

I had to resort to a little hack to make some of this work well, though. Right now there is no way to create a SurfaceTexture in an initially detached state. You must always pass a texture in the constructor, so I pass 0 and then immediately call detachFromGLContext(). Pretty crappy, but it should be relatively safe. I filed an Android bug to request a no-arg constructor for SurfaceTexture more than two years ago, but nothing has happened. I’m not sure why Google even allows people to file stuff, honestly.
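
Roughly, the attach/blit/readback dance looks like this (a sketch; the SurfaceTexture calls are Java methods reached through JNI, shown here as comments, and the names are illustrative):

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdint.h>

// Assumes a GL context created just for readback is current, and that
// the SurfaceTexture has been attached and updated:
//   surfaceTexture.attachToGLContext(srcTex);  // Java, via JNI
//   surfaceTexture.updateTexImage();
void readbackFrame(GLuint srcTex, int width, int height, uint8_t* pixels) {
    // External textures can't be FBO color attachments, so blit by
    // drawing a quad that samples srcTex (samplerExternalOES) into an
    // FBO backed by an ordinary RGBA texture.
    GLuint dstTex, fbo;
    glGenTextures(1, &dstTex);
    glBindTexture(GL_TEXTURE_2D, dstTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, dstTex, 0);
    // ... draw a fullscreen quad sampling srcTex here ...

    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &dstTex);
    // Finally: surfaceTexture.detachFromGLContext();  // Java, via JNI
}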

tl;dr: Video decoding should be much better in Firefox for Android as of today’s Nightly if you are on Jelly Bean or higher. Please give it a try, especially if you’ve had problems in the past. Also, file bugs if you have issues!


Flash on Android 4.4 KitKat

There has been some talk recently about the Flash situation on Android 4.4. While it’s no secret that Adobe discontinued support for Flash on Android a while back, there are still a lot of folks using it on a daily basis. The Firefox for Android team consistently gets feedback about it, so it didn’t take long to find out that things were amiss on KitKat.

I looked into the problem a few weeks ago in bug 935676, and found that some reserved functions were made virtual, breaking binary compatibility. I initially wanted to find a workaround that involved injecting the missing symbols, but that seems to be a bit of a dead end. I ended up making things work by unpacking the Flash APK with apktool, and modifying libflashplayer.so with a hex editor to replace the references to the missing symbols with something else. The functions in question aren’t actually being called, so changing them to anything that exists works (I think I used pipe). It was necessary to pad each field with null characters to keep the size of the symbol table unchanged. After that I just repacked with apktool, installed, and everything seemed to work.
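
If you wanted to script that instead of hand-editing, the core of it is just a search-and-overwrite on the library. A sketch; the symbol name below is a placeholder, since the real ones come out of readelf:

#include <algorithm>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

int main() {
    std::ifstream in("libflashplayer.so", std::ios::binary);
    std::vector<char> data((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    std::string from("_ZSomeMissingSymbol");  // placeholder name
    std::string to("pipe");
    from.push_back('\0');                // match the whole table entry
    to.resize(from.size(), '\0');        // null-pad to keep sizes equal

    auto it = std::search(data.begin(), data.end(), from.begin(), from.end());
    if (it != data.end())
        std::copy(to.begin(), to.end(), it);

    std::ofstream out("libflashplayer.so.patched", std::ios::binary);
    out.write(data.data(), data.size());
}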

There is apparently an APK floating around that makes Flash work in other browsers on KitKat, but not Firefox. The above solution should allow someone to make an APK that works everywhere, so give it a shot if you are so inclined. I am not going to publish my own APK because of reasons.


Using direct textures on Android

I’ve been working at Mozilla on Firefox Mobile for a few months now. One of the goals of the new native UI is to have liquid smooth scrolling and panning at all times. Unsurprisingly, we do this by drawing into an OpenGL texture and moving it around on the screen. This is pretty fast until you run out of content in the texture and need to update it. Gecko runs in a separate thread and can draw to a buffer there without blocking us, but uploading that data into the texture is where problems arise. Right now we use just one very large texture (usually 2048x2048), and glTexSubImage2D can take anywhere from 25ms to 60ms. Given that our target is 60fps, we have about 16ms to draw a frame. This means we’re guaranteed to miss at least one frame every time we upload, but likely more than that. What we need is a way of uploading texture data asynchronously (and preferably quicker). This is where direct textures can help.

If you haven’t read Dianne Hackborn’s recent posts on the Android graphics stack, you’re missing out (part 1, part 2). The window compositing system she describes (called SurfaceFlinger) is particularly interesting because it is close to the problem we have in Firefox. One of the pieces Android uses to draw windows is the gralloc module. As you may have guessed, gralloc is short for ‘graphics alloc’. You can see the short and simple API for it here. Android has a wrapper class that encapsulates access to this called GraphicBuffer. It has an even nicer API, found here. Usage is very straightforward. Simply create the GraphicBuffer with whatever size and pixel format you need, lock it, write your bits, and unlock. One of the major wins here is that you can use the GraphicBuffer instance from any thread. So not only does this reduce a copy of your image, but it also means you can upload it without blocking the rendering loop!

To get it on the screen using OpenGL, you can create an EGLImageKHR from the GraphicBuffer and bind it to a texture:

// Platform (not NDK) headers: building against these requires the AOSP
// tree or copied headers, plus linking against libui/libEGL/libGLESv2.
#include <ui/GraphicBuffer.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

using namespace android;

// These may already be provided by eglext.h; defined here just in case.
#define EGL_NATIVE_BUFFER_ANDROID 0x3140
#define EGL_IMAGE_PRESERVED_KHR   0x30D2

GraphicBuffer* buffer = new GraphicBuffer(1024, 1024, PIXEL_FORMAT_RGB_565,
                                          GraphicBuffer::USAGE_SW_WRITE_OFTEN |
                                          GraphicBuffer::USAGE_HW_TEXTURE);

// Lock grants CPU access to the pixels; this is safe from any thread.
unsigned char* bits = NULL;
buffer->lock(GraphicBuffer::USAGE_SW_WRITE_OFTEN, (void**)&bits);

// Write bitmap data into 'bits' here

buffer->unlock();

// Create the EGLImageKHR from the native buffer. (If eglCreateImageKHR
// isn't declared, fetch it with eglGetProcAddress("eglCreateImageKHR").)
EGLint eglImgAttrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
EGLImageKHR img = eglCreateImageKHR(eglGetDisplay(EGL_DEFAULT_DISPLAY), EGL_NO_CONTEXT,
                                    EGL_NATIVE_BUFFER_ANDROID,
                                    (EGLClientBuffer)buffer->getNativeBuffer(),
                                    eglImgAttrs);

// Create GL texture, bind to GL_TEXTURE_2D, etc.

// Attach the EGLImage to whatever texture is bound to GL_TEXTURE_2D
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, img);

The resulting texture can be used as a regular one, with one caveat. Whenever you manipulate pixel data, the changes will be reflected on the screen immediately after unlock. You probably want to double buffer in order to avoid problems here.

If you’ve ever used the Android NDK, it won’t be surprising that GraphicBuffer (or anything similar) doesn’t exist there. In order to use any of this in your app you’ll need to resort to dlopen hacks. It’s a pretty depressing situation. Google uses this all over the OS, but doesn’t seem to think that apps need a high-performance API. But wait, it gets worse. Even after jumping through these hoops, some gralloc drivers don’t allow regular apps to play ball. So far, testing indicates that this is the case on Adreno and Mali GPUs. Thankfully, PowerVR and Tegra allow it, which covers a fair number of devices.
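
To give a flavor of the dlopen route: you resolve the C++-mangled GraphicBuffer symbols from libui.so at runtime and call the constructor on memory you allocate yourself, since the object’s real size isn’t visible to you. A sketch, with an assumed mangled name; pull the real symbols from the target library with readelf or nm:

#include <dlfcn.h>
#include <stdint.h>
#include <stdlib.h>

// Assumed mangling for GraphicBuffer(uint32_t, uint32_t, PixelFormat,
// uint32_t); verify against the device's libui.so before trusting it.
typedef void (*GraphicBufferCtor)(void* obj, uint32_t w, uint32_t h,
                                  int32_t format, uint32_t usage);

void* createGraphicBuffer(uint32_t w, uint32_t h, int32_t format,
                          uint32_t usage) {
    void* handle = dlopen("libui.so", RTLD_LAZY);
    GraphicBufferCtor ctor = handle
        ? (GraphicBufferCtor)dlsym(handle, "_ZN7android13GraphicBufferC1Ejjij")
        : NULL;
    if (!ctor)
        return NULL;
    // We can't know sizeof(GraphicBuffer) from here, so over-allocate.
    void* gb = malloc(1024);
    ctor(gb, w, h, format, usage);
    return gb;
}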

With any luck, I’ll land the patches that use this in Firefox Mobile today. The result should be a much smoother panning and zooming experience on devices where gralloc is allowed to work.


Android = Linux-on-anything-with-a-screen

Here we are in 2011. Obviously, the year of the Linux desktop, right? You’d hope so. It’s not like there hasn’t been plenty of time to make it happen. Red Hat, Canonical, Novell, and many other companies and individuals have been working hard on it for quite a while. And yet, very few inroads have been made. Sure, Dell has been selling Ubuntu on some netbooks with limited success, but that is hardly a victory.

In 2007, Google unveiled the Linux-based Android smartphone platform. The first phone wasn’t released until a year later in 2008, along with the Android source code. Now, in Q4 2010, Android is the best-selling smartphone operating system in the world, surpassing Symbian, BlackBerry, Windows Mobile, and yes even iOS. In 2 years, it went from zero to total domination, with no signs of slowing down. Not everyone agrees that Android is the best smartphone OS (ahem, Gruber), but it’s impossible to say that it isn’t a success. And it uses Linux. In fact, I believe you could say that Android is the most successful and widely deployed Linux product ever. So why haven’t we standardized on it for other non-smartphone uses? It would require some work, but at least there is some chance it could work out.

Initially I was going to try to call out all of the infuriating things about developing on desktop Linux, but I got tired. All of that has been said elsewhere by people smarter than me. The main reason I’m really behind Android is that it dramatically reduces the amount of fragmentation. It’s just not possible for a hardware vendor like AMD or NVIDIA to support the multitude of distros out there. If you told those guys they only had to support a couple of versions of Android, they would be able to deliver much higher-quality support.

The Eclipse IDE, Dalvik VM, and Android application framework are all great assets and deliver a far better development experience than standard Linux. You get that for free. If Java isn’t your bag, though, you can always use the Native Development Kit to write in C, C++, C#, or whatever you want. I would love to see Clutter on Android, for instance. And Node.js/gjs. Or any number of other great projects.

It’s already possible to run Android on a PC via the Android-x86 project. They’ve added basic mouse support, as well as hardware-accelerated OpenGL ES 2.0 via Mesa (on Intel hardware). I have a Cr-48 running it here, and it works amazingly well. Even wifi works.