Measuring Camera Stability - A Quantitative Approach


Comments

  • edited September 2011
    This way of assessing the rig movements through image analysis is really a new and important step forward. Thanks Tom.

    When looking at the video of test 2, the sequence from 0.50 to 1.50 is where the rig is in normal flight. The main movements I noticed are rapid small jolts, a longitudinal shift/rotation, a pan rotation, and a transverse sway. Longitudinal means in the direction of the line. So, because the camera is aiming at 90
  • edited September 2011
    Very interesting stuff!! Just thinking out loud (actually... with my fingers on the keyboard)... There are so many complex and inter-related variables that influence camera movement once it's airborne.

    Some of the major variables are:
    Rig characteristics
    Wind characteristics
    Line characteristics and tension
    Kite characteristics
    etc.

    Using video analysis software is a great idea to practically measure movement and correlate it with what the camera is "seeing". Using gyro and accelerometer sensor data is also very helpful for understanding higher frequency movements. There are so many possibilities with this. Ultimately the variables listed above (and many more) combine together to influence camera movement, but it seems very difficult to determine the role each one plays when trying to measure their effects all at once in the air. I guess it comes down to knowing what data is important and how you can use it to affect your system design.

    I think it would be useful if you could separate or control some of these variables to help understand how each influences the camera movement. One potential approach would be to focus on the rig itself and how it responds to input movement at the point that it attaches to the line. Creating a simplified test environment in the
  • During KapiFrance we had such a testing device.
    It needs a strong rubber cord to provide the line tension, or a pulley at the top with a hanging load.
    It is very helpful for testing damping devices, so I guess it will show some interesting results. It may be harder to reproduce the true rig characteristics because the action of the wind will be missing, and it is not easy to set up a long length of line.
  • Christian and Mike, thanks for your input.

    Christian, the point you made about the pull on the line is interesting. Video Deshaker also saves information about how much the image appears to zoom. If a KAP camera is pointed down and the tension on the line dropped, that may provide a way to measure a change in altitude. I would have to go through the math and do some tests along the lines of what Mike was talking about to see if I could take the Deshaker zoom factor data and convert it to a more useful number like (delta h / h) or preferably something more concrete like an actual altitude in meters.
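
    As a rough back-of-the-envelope sketch of that math (just a guess at the relationship, not something I've verified, and it assumes a nadir-pointing camera with a simple pinhole model over flat ground): the image scale goes as focal length over altitude, so the frame-to-frame zoom factor Deshaker reports would map to a fractional altitude change roughly like this:

        # Hypothetical helper, assuming a nadir camera and pinhole geometry:
        # image scale ~ f / h, so zoom_factor = scale_new / scale_old = h_old / h_new.
        def relative_altitude_change(zoom_factor):
            """Return delta_h / h implied by a Deshaker zoom factor."""
            return (1.0 / zoom_factor) - 1.0   # ~ -(zoom_factor - 1) when close to 1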

    I also like the idea of intentionally aiming the camera so it is not aligned with any of the real world coordinates, but you're right, converting camera coordinates back to real world coordinates would be difficult at best, and might not be possible at all. Another possibility would be to run multiple cameras, but that moves this style of testing out of the realm of what is possible to everyone who's doing KAP and into a realm of having to build instrumented rigs. I'd rather avoid that in the long run, though I'd like to build a multi-camera rig in the short run just to verify that the technique works.

    I think your analysis of what's going on with my rig is correct. I did two other flights yesterday, though I haven't had time to run the numbers on them. (I'm processing the second one as I type.) There is some very real coupling between axes on some of the moves the rig does. At first I thought it was strictly because of parallax, but it became obvious yesterday that it's because the kite line hangs at an angle, and the rig will swing around that tilted axis, causing motion in at least two axes any time it swings side-to-side.

    On the setup of my Picavet, I tend to fly with higher angle kites these days. I set the upwind leg of my Picavet to be quite short with respect to the downwind leg. I think this leads to more high-frequency oscillations than if I set it up as an equilateral triangle, but it results in smaller motion in the transverse axis. Well spotted!

    As for a lab, I have the perfect spot picked out:

    CFHT

    If I could just get that big camera rig out of the way, it would be perfect! ;)

    Tom

    P.S. I was, of course, joking about the dome, but there's a good chance I can use some space in our headquarters machine shop. Getting 30m of line would not be easy, but it would be possible.
  • Add a large fan and a way to modulate the line based on actual flight recordings and you would have the perfect indoor test facility. Benedict Kite Labs, love it!!!
  • Hey Tom,
    You can't use that camera rig, it looks overexposed to me!
    :-)
  • Aaaah, you got me, James! I've tried doing HDR there, and I still get blasted on that one hot spot. Too shiny, and the lights are too bright.

    Mike, we're looking at putting large vents in the dome structure, so we may even get ambient wind to play with indoors!

    On the topic of the tests, I had made a pretty bad mistake with how I was handling my statistics. It didn't affect the FFTs, but it did affect what I was calling average velocities. So from the really awful flight I posted about earlier, these are the statistics:

    Mean Position
    X -7.82 +/- 6.74 deg
    Y 4.54 +/- 4.68 deg
    Z -2.50 +/- 4.23 deg

    Mean Absolute Velocity
    X 10.29 +/- 7.29 deg/sec
    Y 7.59 +/- 5.97 deg/sec
    Z 15.61 +/- 12.18 deg/sec

    All of which points toward a messy flight. Gotta love error bars that large!

    I had a chance to fly the same rig with, unfortunately, a different kite at Hapuna Beach yesterday afternoon:

    Mean Position
    X 2.87 +/- 0.96 deg
    Y 0.01 +/- 0.38 deg
    Z -2.11 +/- 1.15 deg

    Mean Absolute Velocity
    X 0.84 +/- 0.97 deg/sec
    Y 0.71 +/- 1.08 deg/sec
    Z 2.61 +/- 2.41 deg/sec

    Which indicates something we all already knew: wind obviously plays a big role in the performance of a KAP rig! In both cases the camera was pointed 90 degrees to the right of downwind, or the transverse direction.

    I don't have the graphs from this one posted yet, nor do I have the video itself posted, but that small feature just above 5Hz is still there, and there are strong peaks below 1Hz. The overall shape is different, but it looks like a couple of the peaks coincide with peaks from the session with the nastier wind. I'll know more when I get some time to go over it. I did a second flight that I haven't processed yet. In that one I used Christian's method and did two minutes of video in the longitudinal, then in the transverse, and finally in the vertical orientation. That should give me plenty of video to compare the rates and see if they translate, to do a sliding boxcar FFT sampling to see if I get consistent results, and basically answer most of the remaining questions I have about the method. More work required. But it's a start.

    On another topic, I'm just about ready to cut metal on the new suspension that I'm developing this method to test. Once it's ready I'll test a Picavet and the new suspension back-to-back to compare how each fares under similar conditions.

    Tom
  • edited November 2011
    Simon and Tom's earlier comments in this thread about using an iPod app to measure rig movement got me searching for an app that also measures gyro data. I found a killer application for my iPod Touch that lets you log both gyro and acceleration data. The app is called "Sensor Data" and logs data at up to 100 Hz in .csv format. I had a chance to use this app with the iPod to evaluate different damping settings of the viscous damper test rig and got some nice initial results.

    A more detailed discussion about using this app to log data from a kite flight can be found in a related thread that is occurring in parallel with this one (see the 11/5/11 entry).
  • After reading Mike's post in the other thread, I'm downloading a copy of the app. It's not cheap, but from what I saw of Mike's data it's going to be seriously handy. If nothing else it gives me a way to verify that the numbers I'm getting off my video are real. But I think I'm going to make a clip for my iPhone to catch a ride on my test rig so it can act as a data logger.

    Seriously good find, Mike!

    Tom
    This is probably one of the most interesting discussions I've seen so far. It took me a long time to read it. It will probably take me a few days to digest all this data.

    Here are a few comments/observations from my side. Taking measurements like those discussed above might confirm or refute my theories below.
    - I use stepper motors on my rig. One of the reasons is to get a relatively slow, constant-speed movement. This minimizes rig oscillation while it moves.
    - Also, when stepper motors are not moving they are locked in place, compared to servos, which might be fighting to maintain their position, especially if the rig is not well balanced.
    - With a 1.5-pound rig it's fine in light wind, but at higher wind speeds I tie large metal washers to the Picavet cross to increase the weight. This helps minimize rig movement.
    - When I first started with KAP the pulley mountings were rigid, not ball-bearing pulleys. When I switched to Pecabe blocks the oscillations of the rig seemed to damp out much quicker. These little pulleys dissipate the energy more efficiently.
    - On my second rig the pan axis was spring-loaded, as I was trying to eliminate some mechanical slack. I can't tell whether it was better or not, but with actual measurements I should be able to.

    Tom, how much work is it to extract data from a video?
  • Yvon, thanks for bringing this thread back up. I haven't touched this in a while, unfortunately. But I had started writing it all up with the intention of sharing everything here, similar to Mark's articles on 3D point cloud extraction and photogrammetry.

    Rather than drag that out, here are the steps I use for extracting data:

    1 - Try to use the same camera setup for all tests. You can take out some camera characteristics during processing, but it's a fairly approximate process. Better to use the same camera every time.
    2 - Know what the camera setup is. If I'm using a video/still camera, I take a still image before each video test so I can get the lens focal length information out of the EXIF header. The rest can come out of the DPReview database.
    3 - Shoot a video. If you plan to do FFT extraction, try to get several minutes of representative video. That is to say, 20 seconds of good video surrounded by five minutes of hauling and letting out line is not representative. Get the camera to altitude, then leave it there for several minutes, then haul back down.
    4 - Run the video through VirtualDub and Video Deshaker. By default Video Deshaker tries to save its log file in a location that doesn't exist on my computer, so I have to give it a real file location and name every time; you can find this information in the how-to on the Deshaker web site. The Video Deshaker log file is a CSV file that saves information on a frame-by-frame basis. The format changed with the most recent version of Deshaker, so I won't go into what each column is for; the Deshaker site has this information. One word of warning: if your camera has a rolling shutter, you will have to enter a number to compensate for the rolling shutter effect. The author of Deshaker lists several camera models and their rolling shutter numbers, and has instructions for calculating your own for whatever camera you're using. Numbers for X, Y, and Z motion are stored in different columns of the log file depending on whether or not you are compensating for the rolling shutter effect.
    5 - At this point I pull the file into Excel, though you can also pull it into some other analysis package. Open Office works fine, too.

    Where you go from here depends on what you're trying to find and how esoteric you want to get. I tend to get pretty esoteric. So my next couple of steps are:

    6 - Pull out the X (pixels), Y (pixels), and Z (degrees) of motion columns from the Deshaker log data and stick them in a new worksheet.
    7 - Convert X and Y to degrees of rig rotation. This is where you need to know the physical size of your chip and the focal length of your lens. This is an approximate step! I think the only real way to do this is to put your camera on a rotary stage or some sort of precision turntable and shoot a video where you rotate it by discrete amounts or at discrete speeds. You basically feed it a known rotation and see what Deshaker says it is. Voila, that's your conversion factor. (See the rough sketch just after this list.)
    8 - Convert frame number to time. I shoot my videos at 60fps, but 30fps should be fine for most purposes. The fastest feature I've seen in an FFT was about 5Hz.
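
    Here is a rough sketch of steps 6 through 8 in Python rather than a spreadsheet, purely as an illustration: the sensor width, focal length, log file name, and column positions below are placeholders, so check the Deshaker documentation for your version and your own camera's specs (or use the turntable calibration above) before trusting any numbers it produces.

        import csv
        import math

        SENSOR_WIDTH_MM = 6.17   # placeholder: physical width of your camera's chip
        IMAGE_WIDTH_PX = 1920    # horizontal resolution of the video
        FOCAL_LENGTH_MM = 5.0    # from the EXIF of a still shot with the same setup
        FPS = 60.0               # frame rate the video was shot at

        PIXEL_PITCH_MM = SENSOR_WIDTH_MM / IMAGE_WIDTH_PX

        def pixels_to_degrees(shift_px):
            """Approximate rig rotation implied by an image shift of shift_px pixels."""
            return math.degrees(math.atan(shift_px * PIXEL_PITCH_MM / FOCAL_LENGTH_MM))

        times_s, x_deg, y_deg, z_deg = [], [], [], []
        with open("deshaker.log") as f:                     # placeholder path
            for row in csv.reader(f):                       # adjust the delimiter if needed
                if not row or not row[0].strip().isdigit():
                    continue                                # skip headers and blank lines
                frame = int(row[0])
                # Placeholder column positions -- they move depending on whether
                # rolling shutter compensation is enabled.
                x_px, y_px, rot_deg = float(row[1]), float(row[2]), float(row[3])
                times_s.append(frame / FPS)                 # step 8: frame number -> seconds
                x_deg.append(pixels_to_degrees(x_px))       # step 7: pixels -> degrees
                y_deg.append(pixels_to_degrees(y_px))
                z_deg.append(rot_deg)                       # roll already comes out in degrees

        # Per-frame shifts times FPS give angular rates in degrees per second:
        x_rates = [d * FPS for d in x_deg]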

    At this point you have angular rotation rates for each of the three axes as a function of time. Already there are some statistics you can do with this data, such as finding your maximum rotation rate in each axis (what's the worst your rig ever has to deal with?), the median rotation rate in each axis (what does your rig see most of the time?), and the mean rotation rate in each axis (what, on average, is your rig dealing with on a given flight?). You can also calculate the standard deviation to tell you how messy your numbers are. I've found the stdev value is a better indicator of turbulence than of the quality of the data itself; the data itself appears to be quite good.

    Keep in mind when doing these statistics that you will have both positive and negative rates as your rig oscillates around a zero point for each axis. So unless you take this into consideration, your mean angular velocity will always come out close to zero. I go ahead and convert my angular rates to absolute values before doing statistics on them to avoid this.

    One other thing to keep in mind is that if you run the analysis on the entire file, you're getting your launch, the steady portion of the flight, and the recovery. Before doing any statistics, start chopping your data up into discrete blocks: "Launch" "Flight" "Recovery" or somesuch. Otherwise your numbers may not be good indicators of what's really going on.
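
    As a minimal sketch of those statistics (again just an illustration with hypothetical names; it assumes per-axis rates in degrees per second and matching timestamps, like the x_rates and times_s lists in the sketch above):

        import statistics

        def rate_stats(rates_deg_s, times_s, start_s, end_s):
            """Stats on the absolute angular rate between start_s and end_s (e.g. the 'Flight' block)."""
            segment = [abs(r) for r, t in zip(rates_deg_s, times_s) if start_s <= t <= end_s]
            return {
                "mean": statistics.mean(segment),
                "median": statistics.median(segment),
                "max": max(segment),
                "stdev": statistics.stdev(segment),
            }

        # e.g. drop the launch and recovery and keep only 60 s to 240 s of steady flight:
        # print(rate_stats(x_rates, times_s, 60.0, 240.0))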

    And there's more!

    9 - Integrate the angular rates (not the absolute value of the angular rates) to get angular positions as a function of time. (A small sketch follows after step 10.)
    10 - Find the max and min for each axis. This is the maximum excursion you got in each axis.
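
    A tiny sketch of steps 9 and 10, under the same assumptions as the sketches above (signed rates in degrees per second, fixed frame interval):

        def integrate_rates(rates_deg_s, dt_s):
            """Cumulatively integrate signed angular rates into angle vs. time (rectangle rule)."""
            angle, angles = 0.0, []
            for r in rates_deg_s:
                angle += r * dt_s
                angles.append(angle)
            return angles

        # angles_x = integrate_rates(x_rates, 1.0 / FPS)
        # excursion_x = max(angles_x) - min(angles_x)   # worst-case pointing wander in that axis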

    A brief aside:

    Oscillations in a rig can hurt you in two ways. The first is large-excursion, low-speed oscillation, which makes it hard to know which way your rig is pointing. You see this in Picavet rigs when the clips are set too close together and the pan axis wobbles back and forth slowly over and over and over. In a pendulum rig you see it when the rig rocks back and forth about the kite line over and over. The angular rates may be slow, so it won't necessarily cause blurries, but it makes it hard to tell which way your rig is pointing at any moment in time.

    The second is small-excursion, high-speed vibrations. These are what contribute most to image blur. They show up as "shakies": the kite pumping the line so the rig wobbles fore/aft, a loose pan axis where the rig bounces back and forth, or a shaky servo that just can't settle down.

    Finding the maximum and minimum in each axis will tell you how bad your large-excursion oscillations are. It's possible to have relatively slow motion, say two degrees per second or so, that wobbles the camera by +/- 15 degrees. Finding the maximum angular velocity for each axis will tell you how fast that axis is moving at its worst. It's possible that these oscillations won't have large excursions, maybe a few degrees at most, but they may make the rig move by ten or twenty degrees per second or worse. Both of these bits of information are important to know.

    Now is where the real fun starts:

    11 - In Excel, and in most of these data analysis packages, you can take a Fourier Transform of a time sequence. The data we generated in step #9 constitutes a time sequence showing the angular orientation of the rig in three axes as a function of time. If you run an FFT on each axis in turn, you can find out the period of the oscillations that make up the motion in that axis, and how much power is coming from each.

    Of all the steps, this is the most tedious. I'm sure other packages are different, but setting up an FFT in Excel is a real pain in the rear. I'm certain life could be made much simpler with a couple of macros, but I haven't been driven to the point where I've written them. Up until this point everything can be done by setting up a spreadsheet that does all your calculations for you. Dump the Deshaker file in, and data comes out the other end (well... once you pick the segment of the video you want to work on, that is.) Doing the FFTs is a pain.

    It's also the hardest one to get data out of. Here's why: if you're sampling at 60Hz, the fastest frequency you can analyze is 30Hz (the Nyquist limit), and the frequency resolution is the sample rate divided by the number of samples. Data taken over 34 seconds, give or take, will give you a 2048 element FFT, with a bin spacing of 60/2048, or about 0.03Hz. This sounds pretty tiny until you start to look at where the bulk of the information in a KAP FFT lives: below 1Hz. That means the bit you're interested in is only broken up into about 34 slots. To double that you need to double the amount of data, which also increases the processing time. To really understand what's happening below 1Hz, it would be better to bump that by a factor of ten or twenty, and Excel's Fourier Analysis tool tops out at 4096 points, so it can't go much further anyway. And with data sets that big, spreadsheets really aren't the way to go.
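
    If you want to sidestep Excel for this step, here's a sketch of the same idea using NumPy instead (my substitution, purely illustrative; it assumes the integrated angle-vs-time series from step 9, sampled at a constant frame rate):

        import numpy as np

        def angle_spectrum(angles_deg, fps):
            """Return (frequencies in Hz, amplitude) for an angle-vs-time series sampled at fps."""
            samples = np.asarray(angles_deg, dtype=float)
            samples = samples - samples.mean()                   # remove the DC offset
            amplitude = np.abs(np.fft.rfft(samples))
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / fps)   # bin spacing = fps / N
            return freqs, amplitude

        # 2048 samples at 60 fps -> bins every 60/2048 ~ 0.03 Hz, up to the 30 Hz Nyquist limit.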

    There are other FFT packages out there. One in particular is a command line program that'll just take a series of numbers and generate the corresponding FFT. I'd like to see how hard it would be to set this up since it would largely automate the FFT part of the processing. It also has very few restrictions on the size of your data set. (It's also supposed to be more efficient code than what Excel uses, so it should run faster as well, thank goodness!)

    Anyway, if you haven't picked up on it yet, this is where I start waving my hands and saying, "It's slow, it's not fun to do, but it's where the interesting stuff happens." So this is where I slowly grind to a halt.

    Hope this helps.

    Tom

    P.S. I'm currently playing around with a viscously damped pendulum suspension idea. I hope it goes a little further than my gimbal suspension did. Anyway, it's lots of fun.
  • edited November 2011
    While out flying today, I thought I
  • edited November 2011
    Using Excel is a pretty cool way to get the frequency data from the Sensor Data output files. I've heard that Matlab is a little more user friendly and interactive, but it's much more expensive...

    After comparing several videos shot from the rig with the iPod Sensor Data output files, you can begin to correlate what your eyes are seeing in the videos with the FFT frequency data. I guess it's somewhat intuitive that you'll see lower frequency movements from the rig swinging around and higher frequency movements from the line vibrations, but it is interesting to see them in both forms (video and frequency domain). It would be interesting to compare the Deshaker frequency data with the Sensor Data output data.

    I've found that viscous pendulum damping significantly reduces and smooths the lower frequency movements (< 1Hz) in the pitch axis, but has little impact on the roll axis or higher frequency line vibrations (as expected).

    Tom, looking forward to seeing the results you get with your rig.

    YvonH, stepper motors do seem like a good way to eliminate motor vibration while holding position. Another way to do this with RC servos is to turn off the pulses to the servo prior to taking a picture. What kind of stepper motors do you use and how much power do they draw?

    Mike
  • edited November 2011
    I've heard that Matlab is a little more user friendly and interactive, but it's much more expensive...
    There are free alternatives, though. One is Octave (more or less a clone) and the other is Sage Math.

    rgds
    \Seb
  • edited November 2011
    Mike
    The steppers I'm using are little motors from Anaheim Automation in the United States.
    www.anaheimautomation.com

    They are not very strong, but enough for my KAP rig. When I first tested my rig with these, it ran for 8 hours on 4 AA NiMH batteries.

    third kap rig discussion

    I'll try to do a sample video when the wind is acceptable for KAP.