Saturday, February 21, 2015

GT2 Belt Drive Conversion of Printrbot Simple (Wood late 2013 model)

One of the defining characteristics of the 2013/early 2014 versions of the Printrbot Simple was the use of Kevlar fishing line for motion transfer on the X and Y axes. A rubber hose gets superglued to the stepper shaft, a Dremel sanding wheel gets glued to that, and the fishing line gets several wraps around it. This kept cost down (the original was just shy of $300 in kit form), and it works surprisingly well. Mine has held up through quite a lot of printing over the 13 months I have had it running.

However, the fishing line drive does have a couple of disadvantages. It requires tightening every now and then, and it can cost some precision, because the line can walk back and forth on the drum. And frankly, it's just not very dignified looking. Here is a view of the X axis drive with the bed removed.

Thanks to the work of Thingiverse contributor iamjonlawrence there is a printable conversion to GT2 belts for both the X and Y axis. Newer Simple models come with belts, though they cost more than the original Simple kit did.

Y axis
X axis

Jon is a mechanical engineer, and it shows in his hobby work. He has released a number of upgrades for various versions of the Printrbot Simple, each accompanied by professional drawings and a detailed bill of materials. The parts in these kits were very well thought out and fit perfectly; I highly recommend his work.

I printed two sets of the parts - I was concerned that with my printer torn apart, I would not be able to print replacements if I messed up a part. This turned out not to be necessary, but I still think it is worth doing.

I made sure my printer was calibrated well before printing the parts. The tolerances are snug, but if your printer is printing accurately it will fit with only minor brushes with a file to remove burrs or other loose material from the print.

In addition to the McMaster Carr part numbers called out on Jon's BOM, I used the following components from Amazon:

808 Bearings
Belts and pulleys (there is plenty of belt for this conversion - I had enough left over to replace one of the belts if I ever need to)

Note that FDM printers tend to print holes and slots slightly small. I calibrated mine to accurately print outside dimensions, and just drill my holes to the right size. With the hardware specified on the BOM, a 7/64" bit will make a hole nicely sized for the screw to thread into. A 1/8" bit will allow the screw to pass through smoothly.

You'll need to recalibrate the X and Y axes, since the pulleys are slightly larger than the original drums.

Also, have extra zip ties handy - you'll need them to put things back together.

Procedure - Y Axis

First, I clipped the zip ties holding the Y axis carriage to the motion rods. I then removed the stepper.

Next, I installed the new motor plate and bearings, and aligned the pulley.

I fed the belt in and checked motion, and secured one end of the belt to the stop.

The stop gets bolted on.

The second stop gets attached. 

Securing the belt to the tension block is a little tricky. I had to remove material from the slot that the belt passes through to let it pass through twice. The drawing shows clearly that the belt should just fit through the slot when folded back on itself. A short length of filament acts as a pin to hold the belt in place. There are detailed shots of this in the X axis section.

Tension is adjusted by turning the screws in the tension block. At this point, I connected to the printer and tested the motion. All looked good, so I moved on to the X axis.

Procedure - X Axis

The X axis is more involved because you have to install a replacement motor mount plate for the X axis stepper. This is not a trivial process, but it went pretty smoothly.

First, the bed is removed and the X carriage is removed by clipping the zip ties holding it in place.

The side opposite the control board is removed.

The bottom plate can now be pulled free and the X axis motor mount is removed.

The new bearing plate is assembled.

Carefully align the pulley with the bearings. They are a snug fit, but they do fit, and don't allow any slop when assembled.

Install the new motor plate and reassemble.

The new belt ends are held in place with the zip ties securing the carriage to the motion rods. As the drawings call out, the carriage is flipped over and a new hole drilled for the X axis end stop screw.

Here is a detail shot of how the belt tension blocks work. I had to open the slots a bit with an X-Acto knife, just enough to pass the belt when it is folded back on itself.

Test it! I had to remove a little material from the carriage to get it to run smoothly, just rounding over an edge.

I had to modify my back clips that hold my heated bed on. I just bent and cut office clips into a Z shape. Details on the heated bed installation are here.

Initial test prints look really good. I am in the process of recalibrating the X and Y motion in the Printrboard, since the pulleys are slightly larger than the original sanding drums. I will post the final values once they are determined.

Update: the X and Y values for M92 (steps per millimeter) are both just a hair above 80. I have mine printing to within .001" on a 2.000" square test model.
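
For reference, the theoretical GT2 steps-per-millimeter works out like this. This is just a sketch - the motor, microstepping, and pulley tooth count below are my assumptions, not values from Jon's BOM, so check them against your own hardware:

```python
# Theoretical steps/mm for a GT2 belt drive.
motor_steps_per_rev = 200   # 1.8-degree stepper (assumed)
microstepping = 16          # driver microstepping (assumed)
pulley_teeth = 20           # GT2 pulley tooth count (assumed)
belt_pitch_mm = 2.0         # GT2 belts have a 2 mm tooth pitch

mm_per_rev = pulley_teeth * belt_pitch_mm
steps_per_mm = (motor_steps_per_rev * microstepping) / mm_per_rev
print(steps_per_mm)
```

With these numbers the theoretical value is exactly 80; measuring a test print and nudging the M92 values slightly is what accounts for the "hair above 80."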

In Repetier Host, the GCode commands can be entered into the GCode command box on the manual tab, shown below. Enter the command you want to run and hit the Send button.

A good overview of the math is available on this excellent blog entry by Zheng3.

Sunday, January 18, 2015

Audible Frequency Chirp Sonar with the Stellaris Launchpad

Over the last year I've been working towards an underwater sonar system for ROVs and surface boats. In order to learn the basic signal processing required to detect the echoes, I initially got a simple sonar working in air with a desktop conferencing USB speaker/mic running on Windows. A writeup, including source, is here. That article describes the algorithms used in detail and would be a good read if you want the details of how this works.

The next logical step seemed to be to get it working on a microcontroller. There are plenty of low cost ultrasonic sonar modules available that work really well in air, but the idea was to work towards getting a sonar that worked in water. There are currently no low cost sonar modules for hobby use in water. Additionally, the low cost modules only give one echo - with a signal processing approach like this, you get a series of echoes that may convey more information about the environment. As an example, a boat floating above a school of fish could detect both the fish and the bottom.

I selected a Stellaris Launchpad because of the high speed analog to digital converters (ADC) and the 32 KB of RAM. At the required sample rates, the Launchpad has just enough RAM to send a chirp and then record a fraction of a second of audio so that the echoes can be determined. Higher frequency sound will require a higher sampling rate, so I may need to switch to a Teensy 3.1, which has 64 KB of RAM.
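
The buffer arithmetic, using the numSamples and sampleRate values from the firmware listing at the end of this post, shows how tight the fit is:

```python
# Receive-buffer budget for the Stellaris Launchpad (32 KB of SRAM).
ram_bytes = 32 * 1024
num_samples = 6000            # numSamples in the firmware listing
sample_rate = 80000           # sampleRate in the firmware listing

buffer_bytes = num_samples * 4               # samples stored as 32-bit values
record_seconds = num_samples / float(sample_rate)
remaining = ram_bytes - buffer_bytes
```

The 6000-sample buffer alone takes 24,000 of the 32,768 bytes and holds 75 ms of audio, which is why a higher sample rate pushes toward the Teensy's 64 KB.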

A chirp waveform is computed and sent to a small piezo speaker driven by a simple transistor circuit. The piezo supply voltage (VCC in the diagram below) is provided by 3 nine-volt batteries in series to obtain 27V. This diagram shows how it is connected. This is not my diagram - I found it online, but I don't have a reference. If this is yours, please drop me a line.

The return echo is detected by a small amplified microphone from Adafruit. I like this module because it has an integrated level shift. Rather than swinging from -V to +V, the output is shifted to 0 to +3.3V so that it can be connected to an ADC. It's very convenient.

A couple 3D printed parts hold it all to the board just to keep it pointed in the right direction.

The chirp is sent, and the audio immediately starts recording to catch the echo. The same correlation function as used in the previous article is used to pull the echoes out of the recorded audio. The intensities of the correlation function are sent through the debug port to the PC so that it can be plotted. 
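
The same pipeline can be sketched on a PC with NumPy. This is a simplified model of the approach, not the firmware itself: build a square-wave chirp, bury a scaled copy of it in noise, and slide a correlation window across the recording; the largest peak marks the echo delay.

```python
import numpy as np

np.random.seed(0)
sample_rate = 80000                 # Hz, as in the firmware
pulse_len = 0.0015                  # seconds
f0, f1 = 5000.0, 8000.0             # chirp start/end frequency

t = np.arange(int(sample_rate * pulse_len)) / float(sample_rate)
# Square-wave linear chirp: instantaneous frequency sweeps f0 -> f1.
chirp = np.sign(np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * pulse_len)) * t))

# Simulated recording: noise plus a half-amplitude echo starting at sample 300.
rx = 0.1 * np.random.randn(2000)
rx[300:300 + chirp.size] += 0.5 * chirp

# Sliding correlation, analogous to processSample() in the firmware.
corr = np.correlate(rx, chirp, mode='valid')
echo_at = int(np.argmax(np.abs(corr)))
```

Here np.correlate plays the role of the firmware's nested loop; on a PC it is effectively instant, which is the same computation the Launchpad currently spends several seconds on.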

I need to work on optimizing the echo detection code - currently it takes 4 seconds or so to process the audio from each pulse. Also, the power output of the audio transducer is very low, so range is pretty limited: between about 3 and 9 feet. Closer than 3 feet, the echo is hard to pick out of the noise produced when the pulse is sent.

As in the original experiments with the speaker/mic, the results are plotted with a simple Python program set up similarly to a fishfinder display. The results of a test run are shown below. Source for the Python display is modified from code from the previous article. 

Source code for the Launchpad is given below.

Next steps are to work on getting transducers working under water and increasing transmit power. I've made a simple hydrophone to test transducers with - update coming soon.

Audible Frequency Chirp Sonar with the Stellaris Launchpad from Jason Bowling on Vimeo.

#include "inc/hw_ints.h"
#include "inc/hw_memmap.h"
#include "inc/hw_types.h"
#include "driverlib/sysctl.h"
#include "driverlib/interrupt.h"
#include "driverlib/gpio.h"
#include "driverlib/timer.h"
#include "driverlib/debug.h"
#include "driverlib/fpu.h"
#include "driverlib/pin_map.h"
#include "driverlib/rom.h"
#include "utils/uartstdio.h"
#include "driverlib/adc.h"
#include "inc/hw_timer.h"

#define numSamples 6000 //size of receive buffer
#define sampleRate 80000 //sample rate at which the audio for sending and receiving is performed
#define pulseLength .0015  //transmitted pulse duration in seconds

#define chirpStartFreq 5000  //in Hz
#define chirpEndFreq 8000  //in Hz

int chirpLength = 0;

unsigned long g_sampleCounter = 0;
unsigned long ulADC0_Value[1];
unsigned long rxBuffer[numSamples];
int pulse[900]; //stores waveform for sending and comparison. Only need integers
                //for a square wave. Could pack bits to save memory.
                //Must hold at least pulseLength * sampleRate entries.

//double output[501];

void initConsole()
{
    // Initialize UART0 on PA0/PA1 for the debug console at 115200.
    SysCtlPeripheralEnable(SYSCTL_PERIPH_UART0);
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOA);
    GPIOPinConfigure(GPIO_PA0_U0RX);
    GPIOPinConfigure(GPIO_PA1_U0TX);
    GPIOPinTypeUART(GPIO_PORTA_BASE, GPIO_PIN_0 | GPIO_PIN_1);
    UARTStdioInit(0);
    UARTprintf("\nConsole Initialized. System clock is %4d\n", SysCtlClockGet());
}


void initADC()
{
    // The ADC0 peripheral must be enabled for use.
    SysCtlPeripheralEnable(SYSCTL_PERIPH_ADC0);

    // ADC0 is used with AIN0, which is pin PE3 on the Stellaris Launchpad.
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOE);

    // Select the analog ADC function for the input pin.
    GPIOPinTypeADC(GPIO_PORTE_BASE, GPIO_PIN_3);

    // Enable sample sequence 3 with a processor signal trigger.  Sequence 3
    // will do a single sample when the processor sends a signal to start the
    // conversion.
    ADCSequenceConfigure(ADC0_BASE, 3, ADC_TRIGGER_PROCESSOR, 0);

    // Configure step 0 on sequence 3.  Sample channel 0 (ADC_CTL_CH0) in
    // single-ended mode (default) and configure the interrupt flag
    // (ADC_CTL_IE) to be set when the sample is done.  Tell the ADC logic
    // that this is the last conversion on sequence 3 (ADC_CTL_END).  Sequence
    // 3 has only one programmable step.
    ADCSequenceStepConfigure(ADC0_BASE, 3, 0,
                             ADC_CTL_CH0 | ADC_CTL_IE | ADC_CTL_END);

    // Since sample sequence 3 is now configured, it must be enabled.
    ADCSequenceEnable(ADC0_BASE, 3);

    // Clear the interrupt status flag.  This is done to make sure the
    // interrupt flag is cleared before we sample.
    ADCIntClear(ADC0_BASE, 3);

    // Configure Timer0 as a 32-bit periodic timer for the sample clock.
    SysCtlPeripheralEnable(SYSCTL_PERIPH_TIMER0);
    TimerConfigure(TIMER0_BASE, TIMER_CFG_32_BIT_PER);
}

void startTimer()
{
    unsigned long ulPeriod;

    //set timer rate
    ulPeriod = (SysCtlClockGet() / (sampleRate * 3));
    TimerLoadSet(TIMER0_BASE, TIMER_A, ulPeriod - 1);

    //enable the Timer0A timeout interrupt and start the timer
    IntEnable(INT_TIMER0A);
    TimerIntEnable(TIMER0_BASE, TIMER_TIMA_TIMEOUT);
    TimerEnable(TIMER0_BASE, TIMER_A);
}

void stopTimer()
{
    //stop the timer and disable the Timer0A interrupt
    TimerDisable(TIMER0_BASE, TIMER_A);
    IntDisable(INT_TIMER0A);
    TimerIntDisable(TIMER0_BASE, TIMER_TIMA_TIMEOUT);

    //clear any pending interrupt flag
    TimerIntClear(TIMER0_BASE, TIMER_TIMA_TIMEOUT);
}

void initLED()
{
    //enable GPIO pin for the LED (blue LED on PF2 of the Launchpad)
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOF);
    GPIOPinTypeGPIOOutput(GPIO_PORTF_BASE, GPIO_PIN_2);
}

void initPiezo()
{
    //enable GPIO pin for the piezo drive transistor (PB0 assumed)
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOB);
    GPIOPinTypeGPIOOutput(GPIO_PORTB_BASE, GPIO_PIN_0);
}

void generateChirpWaveform()
{
    unsigned long int freq = chirpStartFreq;
    int value = 1;
    unsigned long int start = 0;
    unsigned long int stop = 0;
    unsigned long int counter = 0;
    int sampleComplete = 0;

    //step through the array a half cycle at a time: write the current value
    //for 1/(2*freq) worth of samples, invert the value, calculate the new
    //freq, and repeat. Values are stored in pulse[].
    while (!sampleComplete)
    {
        stop = start + (unsigned long)((1.00 / (freq * 2.00)) * sampleRate);

        for (counter = start; counter < stop; counter++)
        {
            //check position and set sampleComplete when at end of chirp
            if (counter < pulseLength * sampleRate)
                pulse[counter] = value;
            else
                sampleComplete = 1;
        }

        //invert waveform value to be set for next half of cycle
        if (value == 1)
            value = 0;
        else
            value = 1;

        //calculate new freq based on position in pulse: ratio of
        //stop / (pulse length in samples) scaled into the sweep range
        freq = chirpStartFreq +
               (unsigned long)(((chirpEndFreq - chirpStartFreq) * stop) /
                               (pulseLength * sampleRate));

        //position for writing next half cycle
        start = stop;
    }

    //chirpLength is the pulse length in samples, used by processSample()
    chirpLength = (int)(pulseLength * sampleRate);
}


void playChirp()
{
    long int count = 0;
    long int endSample;

    endSample = sampleRate * pulseLength;

    while (count < endSample)
    {
        //drive the piezo output pin to follow the stored waveform
        //(PB0 assumed; must match the pin configured in initPiezo())
        if (pulse[count])
            GPIOPinWrite(GPIO_PORTB_BASE, GPIO_PIN_0, GPIO_PIN_0);
        else
            GPIOPinWrite(GPIO_PORTB_BASE, GPIO_PIN_0, 0);

        //SysCtlDelay burns 3 cycles per count, so this waits one sample period
        SysCtlDelay((SysCtlClockGet() / (sampleRate * 3)));
        count++;
    }
}


void ftoa(float f, char *buf)
{
    //minimal float to ASCII conversion: sign, integer part, and four
    //decimal places
    int pos = 0, ix, num;
    int digits[10], ndig = 0;

    if (f < 0)
    {
        buf[pos++] = '-';
        f = -f;
    }

    //integer part
    num = (int) f;
    f = f - num;
    if (num == 0)
        buf[pos++] = '0';
    while (num > 0)
    {
        digits[ndig++] = num % 10;
        num /= 10;
    }
    while (ndig > 0)
        buf[pos++] = (char)('0' + digits[--ndig]);

    //fractional part
    buf[pos++] = '.';
    for (ix = 0; ix < 4; ix++)
    {
        f = f * 10.0f;
        num = (int) f;
        buf[pos++] = (char)('0' + num);
        f = f - num;
    }
    buf[pos] = '\0';
}

void processSample()
{
    int a = 0;
    int bufferStartPosition = 0;

    double normalizedSample = 0.0;
    double windowSum = 0.00; //cumulative sum for this window
    double temp = 0.0;

    char buffer[20], *str;
    str = buffer;

    //audio values in rxBuffer are shifted integers. Normalized audio is -1 to 1.
    //Recorded samples are 0 to 4096: divide by 4096 and subtract .5 to shift
    //into this range. The stored pulse is 0 or 1: multiply by 2 and subtract 1
    //to normalize.
    while (bufferStartPosition < (numSamples - chirpLength))
    {
        for (a = 0; a < chirpLength; a++)
        {
            normalizedSample = (rxBuffer[bufferStartPosition + a] / 4096.0) - .5;
            temp = normalizedSample * ((pulse[a] * 2.00) - 1.00);
            windowSum = windowSum + temp;
        }

        //send the correlation intensity for this window over the debug port
        ftoa((float) windowSum, str);
        UARTprintf("%s\n", str);

        bufferStartPosition += 1; //increment bufferStartPosition to move window
        windowSum = 0.00;
    } //end outer loop
}


int main(void)
{
    int pingCount = 0;

    //run from the PLL at 40 MHz
    SysCtlClockSet(SYSCTL_SYSDIV_5 | SYSCTL_USE_PLL | SYSCTL_OSC_MAIN |
                   SYSCTL_XTAL_16MHZ);

    initConsole();
    initLED();
    initPiezo();
    initADC();
    generateChirpWaveform();
    IntMasterEnable();
    //initialization complete

    while (pingCount < 1000)
    {
        //send a chirp, then record audio to catch the echoes
        g_sampleCounter = 0;
        playChirp();
        startTimer(); //the timer ISR fills rxBuffer with ADC samples

        while (g_sampleCounter < numSamples)
        {
            //wait for the receive buffer to fill
        }

        stopTimer();
        processSample();
        pingCount = pingCount + 1;
    }

    while (1) {}
}


//Timer0A handler - must be installed in the vector table in the startup file
void Timer0IntHandler(void)
{
    // Clear the timer interrupt
    TimerIntClear(TIMER0_BASE, TIMER_TIMA_TIMEOUT);

    // Trigger the ADC conversion, wait for conversion to be completed.
    ADCProcessorTrigger(ADC0_BASE, 3);

    //everything after this can be moved out of the ISR:
    //set a flag and poll for it in main()
    while (!ADCIntStatus(ADC0_BASE, 3, false))
    {
    }

    //take an ADC reading
    ADCIntClear(ADC0_BASE, 3);
    ADCSequenceDataGet(ADC0_BASE, 3, ulADC0_Value);

    if (g_sampleCounter < numSamples)
    {
        rxBuffer[g_sampleCounter] = ulADC0_Value[0];
        g_sampleCounter++;
    }
}



Saturday, December 27, 2014

Pebble Smartwatch Review

I toyed with buying a Pebble for several months before I actually pulled the trigger. I bought it to solve a specific problem, and then discovered it solved some other problems I had not actively been working on a solution for. It also has a few drawbacks. Here's what I've learned after using it for a couple months.

I keep my phone silenced at work, so that it does not interrupt meetings. However, I sometimes miss the silent alerts - I just don't feel the vibration when a call or text comes in. Pretty much the only people who call or text me during the work day are family, and if they do, they need to reach me. Even when I do feel the alert, I consider it bad form to fish my phone out of my pocket to see what it is - it's not very polite at best, and can give the impression you are ignoring a boss or customer at worst.

I knew the Pebble could alert me to incoming texts and phone calls. That, alone, was worth the gamble. The price dropped to $99 after the Android Wear watches came out, and I decided it was time to give it a shot. I spoke with a couple people who had the Android Wear watches, and although they are impressive technology, I was put off by the high price and very short battery life. A coworker indicated they don't even last a full day, and I didn't want that. So I bought the standard Pebble.

The good...

  • The battery life is outstanding. It will run for an entire week, the way I use it.
  • The alerting works very well. The watch vibrates against your wrist and it's hard to miss. It's quiet and subtle.
  • You can configure which alerts you want. I have mine set to only alert on texts and phone calls, but not things like Facebook notifications. That way, if my phone buzzes but the watch does not, I know it can be ignored until a convenient time.
  • It's waterproof.
  • The application on the phone is a convenient way to load apps and watch faces. 

The not so good...

  • The display driver appears to have some bugs. Sometimes it will start to show distortion or speckles across the display that range from annoying to completely obscuring the content. A restart normally fixes it and it doesn't come back for a while. UPDATE: This is a known hardware issue, caused by an intermittent connection between the LCD and mainboard. A search on "pebble screen tearing" brings up lots of results that indicate the fix is to contact support and RMA the watch. I intend to do this.

  • The display darkens significantly in cold air (below 30 deg F). 
  • The stock wrist band on the low end version is made of some sort of rubber, and it is clammy against the skin if it gets damp. It can easily be replaced, but it stands out against the rest of the device.
  • The pedometer function is wildly optimistic. If I zero it and then hop in the car and drive for 45 minutes, I will have logged 2000 steps by the time I get out of the car. I compared it to an actual pedometer and it was about 50% high by the end of the day.
  • Instead of pulling out your phone, which makes you appear distracted, you now tend to glance at your watch, giving the impression you are in a hurry.

The completely unexpectedly useful....

  • Having a silent alarm clock that wakes you by gently pulsing your wrist is extremely handy if you want to wake at a different time than your partner.
  • The app Sleep As Android integrates seamlessly with the Pebble and tracks your sleep by tracking your movements. If you tell it when you need to be up, along with an acceptable time window, it will watch your sleep and nudge you awake when you are in the shallowest part of your sleep cycle. I find I awake more refreshed and alert. 

  • Additionally, I really like being able to see who is calling and texting if I can't easily get to my phone. Examples are cold weather, or with the phone in a dry box while kayaking.
  • The ability to show different data on the watchface, including time zones or weather, is pretty handy.
  • The music playback controls are pretty cool, and work well with Google Music. It does not appear to work with Amazon Music.

It's not as dramatic a change as, say, the laptop or smartphone, but it's inexpensive and handy. I never miss important calls or texts any more, and the sleep tracking and data displays are modest time savers. Overall, I am quite pleased with it.

Saturday, November 22, 2014

Syma X1 Camera Mount

I think the Syma X1 is about the best fun for the money in RC, ever. It's indestructible, flies well indoors or outdoors in calm air, and is agile and fun to fly. It costs $30 or so on Amazon. I love it.

I tried a couple different ways of mounting a camera under it. The main challenge with this is to get it securely attached, since there is an oddly shaped battery assembly below the main platform. It has to be very light, since the quad doesn't have a lot of cargo capacity.

 This is the best I've come up with so far - it uses HobbyKing vibration mounts and some 3D printed parts. I printed them in ABS for light weight and a bit of flexibility, and I recommend that if you try it. The printed ring fits snugly around the circular frame that the control board electronics sit on and is attached with a couple zip ties. I had to slightly drill out the two unused screw holes in the circular frame.

I used an older 808 camera, which it lifts fine. There might be light enough FPV gear out there now, I'm not sure. There is still some vibration in the video - I'll probably try a little foam between the camera and platform to see if I can get rid of that. It's a fairly low frequency, since it's not causing "jello" (rolling shutter).

STL files hosted on Thingiverse here

Friday, November 14, 2014

Bare-bones HTTP Image Server in Python

I'm looking at what it would take to integrate the IP camera functionality used by my rover into the main control code, rather than using a separate app to serve up the JPG frames from the camera. I've never written a web server before, so I put together a very simple program in Python to figure out the mechanics of sending a JPG file to a requesting HTTP client. It looks like it would be pretty simple to add to the robot code, since the camera callback returns the JPG as a byte array.
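
As a sketch of what that integration might look like, a hypothetical helper could wrap the callback's byte array in a minimal HTTP response. The function name and header choices here are mine, not from the robot code:

```python
def build_jpeg_response(jpg_bytes):
    # Hypothetical helper: wrap a JPEG byte string in a minimal HTTP
    # response, the same framing the server below constructs by hand.
    headers = (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: image/jpeg\r\n"
        "Content-Length: " + str(len(jpg_bytes)) + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + jpg_bytes
```

The accept loop would then just send build_jpeg_response(frame) with whatever frame the camera callback last produced.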

This code listens on a socket for an HTTP request - any HTTP request - and completely neglects to check or sanitize the input. It then serves an image back. It works for testing and understanding how things work down at the socket level, and is absolutely unfit for any other purpose. :-)

Tested in Chrome and IE.

#!/usr/bin/env python

import socket

host = ''
port = 8080
backlog = 5
size = 8096

# Read the entire file as a single byte string
with open('test.jpg', 'rb') as f:
    reply = f.read()

# Create the listening socket once, outside the accept loop
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))
s.listen(backlog)

while 1:
        client, address = s.accept()
        print "Got client connection"
        clientReq = client.recv(size)
        print clientReq

        replyLength = len(reply)
        headers = "HTTP/1.1 200 OK\r\n"
        headers = headers + "Content-Type: image/jpeg\r\n"
        headers = headers + "Content-Length: " + str(replyLength) + "\r\n"
        headers = headers + "Connection: Keep-Alive\r\n"
        headers = headers + "\r\n"

        client.send(headers + reply)
        client.close()

Saturday, October 18, 2014

Android/IOIO 3D Laser Scanner

Once you have a 3D printer, it's a logical jump to start thinking about what you can repair with it. I had read articles on simple DIY laser scanners using webcams and a line laser, and decided it would be a fun project to build one and see how well they work.

Rather than using a webcam, I opted to use an Android phone (Galaxy Nexus) that I had as a spare, and an IOIO board that I already owned. I reasoned that the phones had high quality cameras, and that it should allow for higher resolution scanning than a cheap camera. I wrote a simple program on the phone to take the photos and drive a stepper motor to turn a turntable.

I printed this simple printable turntable design and wired up an EasyDriver stepper driver to the recommended stepper motor, wired for .45 degree steps. I found that things tended to slip on the slick turntable surface, so I added a grippy, rubbery material similar to the shelf liner found in the kitchen section.

I placed a line laser on a printed adjustable mount that I designed and set it at 45 degrees to the camera angle. The line laser from Adafruit makes a nice line. A phone mount modified from an existing GameClip for the Nexus served as an easy mount for the phone.

I wrote a simple Android app based on the HelloIOIO example application that takes a picture, advances the stepper motor, and takes another. The saved images are processed by a Python script that tries to identify the center of the laser line, which is surprisingly wide. Treating the center of rotation (the motor shaft) as the origin, it uses trigonometry to figure out where the beam intersected the part, and then rotates the resulting points into position based on how much the turntable has turned.

This guy has an outstanding explanation of how the math works. I've put the Python script at the end of the blog post in case you are developing something similar.
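
Distilled to its core, the per-pixel reconstruction step looks roughly like this. It's a simplified sketch of the full script below, and the function name and argument list are mine:

```python
import math

def laser_point_to_xyz(px_x, px_y, center_px, theta_deg, angle_deg, img_h):
    # Pixel offset from the rotation axis -> radius, via the laser/camera
    # angle theta, then rotate about the vertical axis by the turntable angle.
    r = (px_x - center_px) / math.sin(math.radians(theta_deg))
    a = math.radians(angle_deg)
    x = r * math.cos(a)
    z = -r * math.sin(a)
    y = img_h - px_y   # invert image y so that up is +y
    return x, y, z
```

The rotation here matches the zRotated/xRotated lines in the script: with the point initially in the camera plane (z = 0), each turntable step just spins it about the vertical axis.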

The resulting point cloud in x,y,z format is loaded into Meshlab. The point cloud is turned into a part with a surface using the method described here.

So. How well does it work? Can I scan parts and then easily run them off on my printer?

It depends entirely on the shape of the object. Here are some examples.

Where the laser line strikes the part, it makes a nice smooth mesh. However, for most parts that are interesting, one area of the part will often shadow another, and detail in that area will be lost. Take a look at the area under the chin of the knight for one example. You also get shadowed areas as the part rotates when one part of the target sticks out a bit. See how the side of the face on the right side of the image below does not show the beam? That's a big chunk of detail that is lost. The mesh gets ugly there, and it would require work in a program like Blender to fix.

I did experiment with two lasers, one on each side of the part, to try to reduce that. I think this is a good approach, but it requires a significantly more precise rig than I made. When I tried to align the resulting two point clouds, I found that the two meshes were slightly different sizes. It appears to require very precise alignment and a stiff setup to keep it all straight. I scrapped this approach - maybe for a later build, but it would be a total redesign.

I also found problems with scanning parts that are glossy or otherwise reflective. The laser beam hits and scatters, rather than making a nice tight beam. Detail in that area is lost, sometimes resulting in the loss of entire faces. I tried scanning a part from a photocopier, and much of it was lost to this.

Finally, parts that were more square than oblong tended not to be illuminated by the beam, and detail was lost. An extreme example is shown in this scan of a wood block, where the entire top was lost.

These issues can be somewhat improved by changing laser angle, laser height, camera distance, etc.  However, it usually involves a tradeoff - you can pick one area to be well illuminated, but at the expense of another. They do work, and they are a very interesting project to play with. If you want to scan in models to do digital sculpting in Blender, as a starting point, they would work well. If the goal is to scan parts for replicating with a minimum of manipulation of the resulting model, it appears to require a more sophisticated approach. Dual beams would help, but you would still have shadowing issues. I think that to really do it right would require scanning the laser up and down while also rotating the part, which significantly complicates the mechanism. 

I enjoyed building it, but the limitations inherent in a single fixed beam are such that most anything I would scan would require extensive work in Blender to make usable. I may come back to laser scanning at some point, but for now I think I'll just work on improving my skills in CAD. 

If you are going to build a scanner like this, the following things help a lot:

- level the platform you are building on
- level the turntable to avoid "wobbles" in the reconstructed part. I did this by shimming the stepper motor in the turntable mount
- make sure the line laser is at 90 degrees to the turntable surface

A system like David LaserScanner that allows you to scan the beam over the whole surface probably helps a great deal. I may try that at some point, but at this point other projects are calling.

Code for the reconstruction script follows. There is a lot of room for improvement.


from Tkinter import *
from PIL import Image
import math

thresholdBrightness = 600
centerLine = 1368
leftBound = 362
rightBound = 2287
upperBound = 996
lowerBound = 1368

def findBrightestToRight(im, y):
 rowBrightestPosition = 0
 rowBrightestValue = 0
 global centerLine
 leftEdge = -1
 rightEdge = -1
 pix = im.load()

 if y > lowerBound:
  return -1

 if y < upperBound:
  return -1

 for x in range(centerLine,rightBound,2):
  pixel = pix[x,y] #get rgb value of current pixel
  pixelBrightness = pixel[0] + pixel[1] + pixel[2]
  if pixelBrightness > rowBrightestValue:
   rowBrightestValue = pixelBrightness
   rowBrightestPosition = x

 if rowBrightestValue > thresholdBrightness:
  return rowBrightestPosition
 else:
  return -1

#todo: handle return of -1 properly


maxSteps = 0
yPixelsPerMM = 1.00
xPixelsPerMM = 1.00
thetaDegrees = 27.0 #angle formed by laser line to camera
thetaRadians = math.radians(thetaDegrees)
rotationAngleDegrees = 0
rotationAngleRadians = 0
degreesPerStep = .45
result = ""

numberOfFiles = 800

#make a blank output file
f = open('scannerOutput1.asc', 'w')
f.close()

for currentFile in range(0,numberOfFiles):
 print "Processing file " + str(currentFile) 
 filename = str(currentFile) + ".jpg"
 im = Image.open(filename)
 imageWidth = im.size[0]
 imageHeight = im.size[1]
 rotationAngleRadians = math.radians(rotationAngleDegrees)
 print rotationAngleDegrees
 result1 = ""
 for y in range(0,imageHeight):
  rowBrightestPosition = findBrightestToRight(im, y)
  #print y
  ##invert y
  yPosition = ((imageHeight * 1.00) - y) * yPixelsPerMM   #this will need to be scaled
  xPosition = ((rowBrightestPosition - centerLine)/(math.sin(thetaRadians))) * xPixelsPerMM
  zPosition = 0.00
  ##we need to rotate the x,y,0 position to its final position based on stepper motor angle
  ##rotate about y axis to proper position according to current part rotation angle
  rotationAngleRadians = math.radians(rotationAngleDegrees)
  zRotated = (zPosition * math.cos(rotationAngleRadians)) - (xPosition * math.sin(rotationAngleRadians))
  xRotated = (zPosition * math.sin(rotationAngleRadians)) + (xPosition * math.cos(rotationAngleRadians))
  yRotated = yPosition
  if rowBrightestPosition != -1:
   result1 = result1 + str(xRotated) + "," + str(yRotated) + "," + str(zRotated) + "\r\n"

 rotationAngleDegrees = rotationAngleDegrees + degreesPerStep
 f = open('scannerOutput1.asc', 'a')
 f.write(result1)
 f.close()

print "Done."

Thursday, July 31, 2014

Failed attempts to get HC-SR04 sonar module working underwater

Over the last few months I've been looking at building a sonar sensor for underwater robotics. I've been mostly focused on learning some basic signal processing to detect the echoes. As documented in a previous post, I've gotten it working in air on a PC using the sound card, and have also recently gotten a TI Launchpad-based version working in air. (More on that later.)

However, since the HC-SR04 is so inexpensive and works so well in the air, I could not resist trying to make it work in water. It would be cheaper, easier, and use less power if I could make it work.

Previous reading on making hydrophones indicated that simply dunking a microphone in a film canister full of mineral oil would effectively couple it to the water around it, and let you hear the sounds in the water. That's what I intend to do with the Launchpad based digital sonar. Since mineral oil is non-conductive, I figured it was worth a try to seal a HC-SR04 in a container of mineral oil and see what it did.

The HC-SR04 emits a short pulse of ultrasound and then waits for the first echo. It then outputs a pulse with a length corresponding to the time it took to hear it. From that, you can calculate the distance to the target.
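
The conversion itself is simple; the interesting part for this experiment is that the speed of sound (and therefore the scale) is more than four times higher in water than in air:

```python
SPEED_OF_SOUND_AIR_M_S = 343.0     # in air at about 20 C
SPEED_OF_SOUND_WATER_M_S = 1482.0  # in fresh water at about 20 C

def echo_time_to_distance_m(round_trip_s, speed_m_s=SPEED_OF_SOUND_AIR_M_S):
    # The pulse travels out and back, so halve the round-trip time.
    return (round_trip_s * speed_m_s) / 2.0
```

A 10 ms round trip is about 1.7 m in air but about 7.4 m in water, so even a module that worked perfectly underwater would badly misreport distance if its firmware assumes the speed of sound in air.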

This has some advantages and disadvantages. It's really easy to use and process the data. However, you only get one data point - you only hear one echo. The digital sonar approach gives you all the echoes back, not just one, with relative intensity information, like this:

I figured that the HC-SR04 must be waiting a small amount of time after sending the initial pulse to prevent it from triggering on the ringing of the sending transducer. If I could get the pulse out of the pill bottle before that window, it should work.

I first got the HC-SR04 working in air with a Stellaris Launchpad. I then sealed up the HC-SR04 in a medicine container full of mineral oil.

The module worked in the oil, but didn't return data that I could correlate in any useful way with distance to target.

With the sensor out of the water, it repeatedly returned a very small value (100 or so).

With the sensor in the water, it alternated between sending an extremely small value (~50) and a very large one (600,000+). It did this pretty much regardless of what it was pointed at, and quite regularly. 50, 600000, 50, 600000, 50..... I'm not sure what was happening, exactly.

I've decided to give up on this approach and return to the Stellaris Launchpad-based digital signal processing approach. That will give me a lot more information since I can look at all the echoes graphically. Bummer. It would have been really cool if it worked.