Chapter 6 — Cameras

This chapter is a diversion from my remote control design to give my brain a chance to cool off a bit with a somewhat simpler task.

I once started working on a book scanner, but got bogged down and never finished. The mechanical bits didn't work as well as I imagined my design would, and attempting to induce hacked Canon camera firmware to do things it was never designed to do didn't really work either.

Recently, however, several android cameras have been announced. Unlike hacked Canon firmware, android is a system actually designed to be programmed. The camera, touch screen, and Wi-Fi are all available for programmers to fiddle with (unlike Eye-Fi SD cards, which work one and only one way, and if you don't like it, too bad).

So that leads me to my next project: an android-camera-based book scanner. My idea for this one is to avoid anything more complicated than a mount for the camera and lights, and just take pictures of the open pages of the book and mathematically flatten them. (Lots of info about this and zillions of other book scanning topics can be found on the DIY Book Scanner forums.)

Actually, I've reconsidered the complicated math. My current plan is to use a piece of museum glass to flatten the book against and take a picture one page at a time. I can do all the odd pages then flip the book over and do all the even pages. I can also arrange for the app to number the pages appropriately so the image files will all sort into the correct sequence.

Anyway, different people will invent different ways to use the camera, so it will be good to have several options for triggering each photo, letting you decide which one works best for your scanning procedure. Some triggers I can think of at the moment are:

  • Tap the screen or use a hardware button on the camera. This is conventional, but you'd probably want to add a time delay to give the vibrations a chance to die down (or even use the acceleration sensors in the android device to detect when vibration stops). Another problem crops up if you are holding open a rather springy book - you might not have a hand available to push the button.
  • Sense the light level. If you have LEDs or floods set up to illuminate the pages, you could leave them off while you are turning the page and bring them up (maybe with a foot switch to leave your hands free), then simply have the android camera realize that things got a lot brighter and go ahead and take the next picture.
  • An even more automatic technique might be a motion sensor. If the camera could detect that your page turning activity has stopped and your hands are not moving around in the field of view any longer, it could just go ahead and take the picture when things get still following a flurry of motion.
  • Once you get handy at page turning, you could probably figure out that it takes the same amount of time to turn every page, so you could also program the camera to just operate on a timer adjusted to your page turning rate. You'd probably want a touchscreen control to make it throw away the last picture when you happened to be too slow, and of course, you need to be able to tell it to stop when you hit the end of the book (that is true for all the other automatic techniques as well).
  • Speech recognition is also a possibility. Might be able to teach the camera app to listen for you to say a word to trigger the next photo (though I could imagine that would get really old after a few hundred pages :-).
  • You'll probably be connected to a host computer (where the pictures are sent), and this host could probably send commands to the camera as well, so taking a picture could also be triggered by a command from the host (maybe someday, the host will be running the automatic page turning machine :-).

That is a good initial pile of stuff to investigate for a new android project, so it is time to go poking around: Obviously touching the screen, communicating over the network, and running a timer are possible. The more interesting triggers to investigate are light and motion. The Camera class has the android interface for controlling a camera, and Camera.PreviewCallback is how you get to the data for a preview image. The android documentation on the raw preview format (NV21 by default) tells me that the first width*height bytes of the image buffer are the Luma plane.

So that all sounds like it should indeed be possible to do the light and motion detection by simply examining the black and white portion of the raw preview data. I just have to figure out how the heck to put all the bits together. The SDK does have a CameraPreview example, so that is a wonderful place to start.

I need to wedge a preview callback into the code. Some of my reading indicated that the camera is going to be clobbering the image buffers out from under me unless I process them really quickly, so rather than using the “normal” camera callback, I used setOneShotPreviewCallback() in the test program. I don't ask for another callback till I have finished examining the image from the last one. All I do with the image is add up the black and white pixels to get an idea of the total light level, and all I do with the light level is send it to the log, so I can only see it working by hooking up adb and running logcat.
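In skeleton form, the test looks something like this (a sketch with my own field names, not necessarily the exact snapshot code):

import android.hardware.Camera;
import android.util.Log;

// Fragment of the activity: sum the luma plane of each preview frame.
private Camera mCamera;        // opened elsewhere with Camera.open()
private int mWidth, mHeight;   // from Camera.Parameters.getPreviewSize()

private final Camera.PreviewCallback mLightLevel = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // In the default NV21 preview format, the first width*height
        // bytes are the black and white (luma) plane.
        long total = 0;
        final int n = mWidth * mHeight;
        for (int i = 0; i < n; i++) {
            total += data[i] & 0xff;   // java bytes are signed
        }
        Log.d("AndyScan", "light level: " + total);
        // Only ask for another frame once we're done with this one.
        camera.setOneShotPreviewCallback(this);
    }
};

// Kick things off (after startPreview()):
//   mCamera.setOneShotPreviewCallback(mLightLevel);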

As primitive as this is, it does seem to actually work. The code is available in BookScannerProto-2012-09-03-18-10.tar.bz2.

That proves that detecting the light level in the camera field of view is feasible; next up is the question of detecting motion.

To do motion detection, I'll want to compare each preview image with the last one. That means I'll need a copy of the last one, and actually making the copies myself is another high overhead operation I'd rather avoid. Fortunately, it turns out that the Camera class supports yet another callback mechanism which is just what I need. I can call addCallbackBuffer() twice to provide two separate pre-allocated buffers for images, then I can use setPreviewCallbackWithBuffer() to get image callbacks that fill in my preallocated buffers. The nice thing is that the callbacks only happen when there are buffers available, so I can take as long as I want to compare them and hand back the oldest buffer when I am done, keeping the newest buffer around to compare when I get the next callback.
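A sketch of that double buffering dance (again using my own names, and assuming mWidth and mHeight hold the preview size):

import android.graphics.ImageFormat;
import android.hardware.Camera;
import android.util.Log;

// Two preallocated buffers sized for the default NV21 preview format.
int bufSize = mWidth * mHeight * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
mCamera.addCallbackBuffer(new byte[bufSize]);
mCamera.addCallbackBuffer(new byte[bufSize]);
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    private byte[] mLast;   // the frame kept from the previous callback

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (mLast != null) {
            long diff = 0;
            final int n = mWidth * mHeight;   // luma plane only
            for (int i = 0; i < n; i++) {
                diff += Math.abs((data[i] & 0xff) - (mLast[i] & 0xff));
            }
            Log.d("AndyScan", "motion: " + diff);
            camera.addCallbackBuffer(mLast);  // hand the oldest buffer back
        }
        mLast = data;   // keep the newest frame for the next comparison
    }
});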

This results in BookScannerProto-2012-09-04-19-46.tar.bz2, which adds up the differences between the pixels in the black and white part of the image and sends the result to the log (just like the light levels in the previous example). I can see the total of differences between the pixels get bigger when I wave my hand around in front of the camera, and drop back down when nothing much is changing.

So, my experiments indicate that all the proposed triggers are feasible. It is time to investigate more things I'll need to do. One thing that would be nice is to get something on the screen other than just the preview image. This means figuring out how the heck to overlay text and wot-not on top of the preview image. There are lots and lots of stackoverflow questions about this, so it seems to be something folks have a lot of trouble with. Some of the examples of how to do such an overlay use the xml layout, but the CameraPreview example builds the preview image in the code, so if I want to use a layout to get things on top of the preview, I need to be able to talk about the Preview class (which is, after all, just a custom ViewGroup) in the xml layout.

The Custom Components android developer page eventually gets around to describing how to reference the components in an xml layout, but it doesn't happen to mention a few minor details. The first one that caused trouble was the fact that it always wants to invoke the two argument form of the constructor, and the existing sample code only defines the one argument constructor. The second problem was that it can't call that constructor unless it is declared public (something that took hours to discover, even though it seems obvious once you find it :-).
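For anyone following along, the shape the class winds up needing is roughly this (a sketch of just the constructor bits):

import android.content.Context;
import android.util.AttributeSet;
import android.view.ViewGroup;

public class Preview extends ViewGroup {
    // Used when constructing the view from code.
    public Preview(Context context) {
        super(context);
    }

    // The layout inflater insists on this two argument form,
    // and it must be declared public or inflation fails.
    public Preview(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onLayout(boolean changed, int l, int t, int r, int b) {
        // lay out the SurfaceView child here (as in the CameraPreview sample)
    }
}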

While I was doing all this, I also moved the Preview class to a separate source file (I don't know if this is required, but it seems cleaner in any case).

Once I got all this to work, I was indeed able to define some text fields that are displayed on top of the camera preview. I made them green and used a style that gave them a drop shadow to make the text more visible on top of the ever changing preview, then I used them to display the light and motion values so I no longer need to go to the log to see them. I also threw in an attribute value in the layout to make the screen stay on all the time (since I doubt you'll want to stop and turn the screen back on every time it goes off while scanning a big book).
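(The same effects are also available from code, in case the xml is mysterious; a sketch, with a made-up view id:)

import android.graphics.Color;
import android.view.WindowManager;
import android.widget.TextView;

// In the activity's onCreate(), after setContentView():
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON); // screen stays on
TextView light = (TextView) findViewById(R.id.light_value);  // R.id.light_value is made up
light.setTextColor(Color.GREEN);
light.setShadowLayer(3f, 2f, 2f, Color.BLACK);               // drop shadow behind the text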

This all resulted in BookScannerProto-2012-09-07-18-39.tar.bz2. This now displays the light and motion values on the screen so you can watch them change without needing logcat.

I guess it is time to start turning this into a real app. I can whip out an icon by cutting and pasting an android icon with the lens taken from a picture of the samsung galaxy camera :-)

I can copy other standard icons from the Android SDK, then add an option menu with a Configure and Exit item (the Exit item even works). That is probably enough for another snapshot: BookScannerProto-2012-09-08-14-49.tar.bz2.

If I'm going to be turning this into a real app, I think I'll change the boring BookScanner name to AndyScan instead (it seems to be an available name - no google searches turn up an existing AndyScan app). Next, I need to see about adding configuration dialogs to it and saving the config info in permanent storage; the android shared preferences machinery looks like the sort of thing I need.

I now have the app renamed, and after much struggle, I have a few new items in the option menu and the config item actually does launch a dummy preferences activity: AndyScan-2012-09-08-22-56.tar.bz2, so the next step will be to add real preferences.

The overview for the standard Preference GUI elements was hard to find, because it was called Settings, not Preferences. But I have managed to work out all the preference mysteries (except for the occasional force close that sometimes happens as I exit - I'm sure it is something stupid I'm doing). I've also tried to add the exposure information to the light level measurements, but it doesn't seem to be working.
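In case it helps someone else hunting for "Settings", the activity itself turns out to be tiny (a sketch, assuming the preference screen lives in res/xml/prefs.xml - the file name is my assumption, not necessarily AndyScan's):

import android.os.Bundle;
import android.preference.PreferenceActivity;

public class ConfigActivity extends PreferenceActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Inflate the preference screen defined in res/xml/prefs.xml
        addPreferencesFromResource(R.xml.prefs);
    }
}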

Here's the snapshot of this latest version: AndyScan-2012-09-09-18-22.tar.bz2.

I did get the mysterious random force close problems to go away by adding null checks to the mCamera references in the preview callback. I realized that I'm setting the camera to null as I'm exiting, yet the preview callback routine might still be running. That seems to have fixed things.
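The fix is just a guard at the top of the callback, along these lines:

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // mCamera is set to null on the way out, and the callback can
    // still fire once more after that, so just bail quietly.
    if (mCamera == null) {
        return;
    }
    // ... normal light/motion processing ...
}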

Before actually trying to take a picture, I have now implemented test code that proves I can run a timer on a fixed interval and notice when the screen is touched:

AndyScan-2012-09-13-17-26.tar.bz2
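The timer half of that test is just a Handler and a Runnable that re-posts itself; something like this, with takeNextPicture() standing in for whatever the trigger will eventually do:

import android.os.Handler;

private final Handler mHandler = new Handler();
private long mIntervalMs = 5000;   // would come from a preference later

private final Runnable mTimerTick = new Runnable() {
    @Override
    public void run() {
        takeNextPicture();                        // hypothetical trigger hook
        mHandler.postDelayed(this, mIntervalMs);  // re-arm for the next page
    }
};

// start: mHandler.postDelayed(mTimerTick, mIntervalMs);
// stop:  mHandler.removeCallbacks(mTimerTick);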

Wow! I've taken a picture, passed the jpg data to an uploader thread, and uploaded it automagically to my desktop's web server using a php uploader script! This is all very fragile preliminary test code, but it demonstrates I can take a picture and get the image to my desktop.

Here is the simple PHP script I run on my fedora 17 box under apache to provide the uploader for the phone to talk to:

/var/www/html/uploader.php

<html>
<head>
<title>Uploaded</title>
</head>
<body>
<table border=0 width=620><tr><td>
<h1>Uploaded</h1>
<?php
// In PHP versions earlier than 4.1.0, $HTTP_POST_FILES should be used instead
// of $_FILES.

error_log(print_r($_FILES,true));
error_log(print_r($_POST,true));
error_log(print_r(headers_list(),true));

if (isset($_FILES['userfile'])) {
    $uploaddir = '/zooty/uploads/';
    $uploadfile = $uploaddir . basename($_FILES['userfile']['name']);

    echo '<pre>';
    if (move_uploaded_file($_FILES['userfile']['tmp_name'], $uploadfile)) {
        echo "File is valid, and was successfully uploaded.\n";
    } else {
        echo "Possible file upload attack!\n";
    }

    echo 'Here is some more debugging info:';
    print_r($_FILES);

    echo '</pre>';
}
error_log(print_r(headers_list(),true));
?>

</td></tr></table>
</body>
</html>

Note the use of the string 'userfile' in the $_FILES reference. That ties into the same userfile name in the code in the PictureUploader.java source code where it is building the Content-Disposition: header to send to the web server (at least I think that's how the uploader gets connected to the file name :-).
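The phone side of that conversation looks roughly like this (a hedged sketch - the real PictureUploader.java may differ in detail, but the name="userfile" part is what has to match the PHP):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

void upload(String uploadUrl, String fileName, byte[] jpegData) throws Exception {
    String boundary = "----AndyScanBoundary";   // any unlikely string will do
    HttpURLConnection conn =
            (HttpURLConnection) new URL(uploadUrl).openConnection();
    conn.setDoOutput(true);
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type",
            "multipart/form-data; boundary=" + boundary);
    OutputStream out = conn.getOutputStream();
    // This name="userfile" is what PHP's $_FILES['userfile'] keys on.
    out.write(("--" + boundary + "\r\n"
            + "Content-Disposition: form-data; name=\"userfile\"; "
            + "filename=\"" + fileName + "\"\r\n"
            + "Content-Type: image/jpeg\r\n\r\n").getBytes("UTF-8"));
    out.write(jpegData);                        // bytes from the jpeg callback
    out.write(("\r\n--" + boundary + "--\r\n").getBytes("UTF-8"));
    out.flush();
    int status = conn.getResponseCode();        // forces the request out
    out.close();
    conn.disconnect();
}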

AndyScan-2012-09-16-21-18.tar.bz2

I only got a 640x480 image, so I need to fiddle with the camera parameters to take higher resolution pictures, and I'm not using any of the preferences to control things yet - I just have all the test stuff hard coded to take a picture on a screen touch and upload with the same image name every time.

A bigger problem is that I can only take one picture. The preview gets frozen if I try to take another one, so I'm probably not doing everything I should to get back into proper preview mode after taking the picture.

But I have demonstrated all the bits are possible, now I just need to get them working together nicely.

I've been trying to up the resolution of the picture, and things are starting to get interesting now. I call takePicture(), but never reach the callback that says the picture was taken. Running logcat shows this interesting error message:

E/Camera-JNI(14689): Manually set buffer was too small! Expected 1473253 bytes, but got 768000!

Poking around with google searches turns up a relevant android-developers mailing list thread, so it looks like the Camera code is using the preview buffers to try and take a high resolution image. It is no wonder it won't fit.

I'll need to beat on the code some to call setPreviewCallbackWithBuffer() with a null callback, which should remove the callback buffers (according to the docs, anyway), or if that doesn't work, I'll have to let a couple of preview callbacks happen without adding the buffers back so they'll be gone and not interfere with the camera.

Hmmmm... I tried that, and now the logcat seems to indicate the camera gets through the whole process of taking the picture and converting to jpeg, but my jpeg callback is still not happening. (And after trying many things, I'm still terribly confused :-).

I've been off working on other things, but now I'm back trying to understand why my custom camera app never gets the jpeg callback, and it appears to be some kind of nasty interaction with setPreviewCallbackWithBuffer(). I previously discovered that some versions of android attempted to use the preview buffers for taking the full size photo if you had preview buffers enabled. So I tried to disable them by calling setPreviewCallbackWithBuffer() with a null callback pointer (which is how the docs say you should disable this stuff), but if I do that, I never get the callback with the actual jpeg picture data. When I tried to remove as much stuff as possible from the camera code and produce a simple camera app, taking pictures did not work at all until I completely eradicated all use of setPreviewCallbackWithBuffer(), at which point it seemed to work perfectly (though I haven't actually done anything with the data - I just examine the logcat to see if I get the callback). Now that I finally know what is going on, perhaps I can work out some way to make it happy.

This version of AndyScan, with all the preview and special trigger features disabled, but which can finally take and upload a high resolution picture (triggered by touching the screen), is: AndyScan-2012-10-21-12-56.tar.bz2

Now if only I can get my preview analysis code working at the same time as the picture taking code :-).

Did it! I don't know if I needed to be this fanatical or not, but I stopped calling routines inside callbacks, and instead post()ed a Runnable to do everything after completely returning from the callbacks. I also allowed enough preview callbacks to occur to completely drain the buffers, then added a one shot preview callback to notice when normal preview was running again, then, and only then do I post() the call to takePicture(). Once I finally hit the jpeg callback after taking the picture, I do another post operation to reactivate my callback with buffer processing so I can start looking for the next trigger event in the preview images. This version which finally has all the bits working at the same time is in AndyScan-2012-10-22-14-39.tar.bz2
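Stripped to its skeleton, the choreography is something like this (my names, and a sketch of the sequencing rather than the exact snapshot code):

import android.hardware.Camera;

private void triggerPicture() {
    // 1. Stop handing buffers back so the with-buffer callbacks drain,
    //    then watch for preview to be running normally again:
    mCamera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // 2. Preview is alive again; take the picture from a post()ed
            //    Runnable, after this callback has completely returned.
            mPreviewView.post(new Runnable() {
                @Override
                public void run() {
                    mCamera.takePicture(null, null, mJpegCallback);
                }
            });
        }
    });
}

private final Camera.PictureCallback mJpegCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] jpeg, Camera camera) {
        // 3. Hand jpeg to the uploader thread, then restart preview and
        //    re-add the callback buffers so trigger detection resumes.
        mPreviewView.post(new Runnable() {
            @Override
            public void run() {
                mCamera.startPreview();  // preview stops after takePicture()
                setupPreviewBuffers();   // hypothetical: re-add buffers + callback
            }
        });
    }
};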

I've now started to implement actual useful features. AndyScan has additional preferences for building the file names for each image (and even pays attention to them), and it uses the upload URL pref to decide where to upload the images (no more hard coded name of my desktop :-). I've done a little work to clean up the code a bit and improve performance by avoiding extra copies when uploading the images, so it is actually starting to become useful. This latest version is AndyScan-2012-10-28-20-25.tar.bz2

More useful features added: I can write to the sdcard or upload the images (or both). All the preferences involved in this are obeyed now. Also the trigger delay preference is obeyed (uses postDelayed() when posting the Runnable that will call takePicture()). This version is AndyScan-2012-10-30-19-51.tar.bz2.

Another day, another few features tweaked: I added a preference to disable the annoying shutter click sound. That seemed like it ought to be simple enough, but it sent me on a slew of google goose chases till I finally found there is no way to do it through the Camera class. An obscure comment in one place not directly related to taking pictures says that if you pass null to the shutter callback parameter of takePicture() then it won't play the sound, but I tried that, and it turns out not to be true.

Finally, the AudioManager class and the setStreamMute() interface came to the rescue. If I mute the stream STREAM_SYSTEM before calling takePicture(), then un-mute it in the jpeg callback after the picture is taken, then it makes almost no noise (and the noise it does make may actually be something mechanical in the camera — it is hard to say for sure, but it is very quiet).
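The whole mute dance is only a few lines (a sketch):

import android.content.Context;
import android.media.AudioManager;

AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
am.setStreamMute(AudioManager.STREAM_SYSTEM, true);   // hush, before takePicture()
mCamera.takePicture(null, null, mJpegCallback);
// ...then in the jpeg callback, once the picture is safely taken:
am.setStreamMute(AudioManager.STREAM_SYSTEM, false);  // restore system sounds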

I added a new preference setting to turn shutter click off and on and this AndyScan source can be found at AndyScan-2012-10-31-20-05.tar.bz2.

Unfortunately, that code only works on some cameras. I should probably give up on this preference and just let people manually turn the volume all the way down.

I'm now looking at a strange thing: Sometimes I'll see a 0 flash on the screen for motion detection. It seems totally impossible to believe that two consecutive preview images could actually be identical, but I put in a load of Log() calls to show me different data during the motion computation, and the system really is sometimes handing me two different preview buffers which contain identical data. It isn't that my difference count is somehow getting zeroed: the buffers really are different objects, and every byte just happens to match. I can only conclude that there is something about the camera hardware or the low level code running it that sometimes delivers the exact same image twice (which makes me wonder if the video recorder sometimes duplicates frames, but I'm not going to investigate that just now :-). I guess the thing to do is just ignore the motion values that come out as zero on the assumption that they are really some kind of glitch.

...time passes...

After a long enough time to forget everything I did before, I'm back working on making this app functional. The first thing I tried was making it actually use the appropriate camera trigger, which meant that I needed to read the preference value of my list of possible trigger techniques. Since I have to assign names and numbers to each item in the list, it seems totally obvious that the way I'd find out what setting the user selected was by calling getInt() to check the current list value. But no! That's just what they'd expect you to do! After much experimentation, I discovered that I need to call getString() which gives me a number's string representation. Once I finally got the stupid pref value, it was relatively easy to test it, so now I've implemented the timer and the touch triggers. I need some calibration code before I can implement motion and light detection.
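For anyone else who hits the same wall, the working incantation looks like this (the pref key and values here are illustrative, not AndyScan's actual names):

import android.content.SharedPreferences;
import android.preference.PreferenceManager;

SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(this);
// A ListPreference stores its entryValues as strings even when they
// look like numbers, so getInt() throws; getString() and parse instead.
int trigger = Integer.parseInt(prefs.getString("trigger_mode", "0"));
switch (trigger) {
    case 0: /* touch */   break;
    case 1: /* timer */   break;
    case 2: /* light */   break;
    case 3: /* motion */  break;
}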

I also added some sensor code to detect the camera moving, but I'm not doing anything with it yet. Eventually I'll want an option for the code to delay taking the picture while the camera seems to be moving. The sensors appear to be as odd as the preview buffers: I can watch the values getting written to the screen, and with the camera totally motionless, it will sometimes show a large delta motion between two samples. I mostly just copied the code from a sensor tutorial I found, and maybe the onAccuracyChanged() function needs to get involved here?
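The sensor code amounts to roughly this (mostly the shape of the tutorial code; the thresholding is still to be written):

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

private final SensorEventListener mShake = new SensorEventListener() {
    private float mX, mY, mZ;

    @Override
    public void onSensorChanged(SensorEvent event) {
        float dx = event.values[0] - mX;
        float dy = event.values[1] - mY;
        float dz = event.values[2] - mZ;
        mX = event.values[0]; mY = event.values[1]; mZ = event.values[2];
        float delta = Math.abs(dx) + Math.abs(dy) + Math.abs(dz);
        // eventually: hold off takePicture() while delta stays large
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // maybe this is where the motionless-but-jumpy readings
        // need to be dealt with?
    }
};

// In onResume():
//   SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
//   sm.registerListener(mShake, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
//           SensorManager.SENSOR_DELAY_NORMAL);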

 