Servo accuracy


I am using the UltraBorg to drive a servo (a large servo from your full UltraBorg kit) on which I have directly mounted a webcam. This is used to scan the camera through 180 deg looking for a particular target. Once found, the servo is adjusted until the target is centralised in the frame, and then Diddy turns through the angular difference between the servo position and straight ahead, ready to drive to the target.
This all works well (although it's taken me a while to get there), but when the servo is sent back to the zero position (straight ahead) it does not get there accurately, stopping a few degrees to either side of zero depending on which direction the servo was turning from. Reading GetServoPosition1 confirms that the UltraBorg "thinks" it has reached the correct position, but that is not quite the same position as originally set by SetServoPosition1(0.0).
In a mechanical situation I believe this would be called "backlash". Is there any way that I can correct this in software? Or am I expecting too much, perhaps servos are just not this accurate anyway?

piborg's picture

You are probably experiencing a problem very similar to backlash given the repeatability.

In this case the servo applies power to its internal motor to correct its position.
The further it is from the target, the more power it tends to apply.
The trouble is that when it gets very close, the power applied can be insufficient to keep the gears rotating, especially when the servo has a load to move.

There are three things you can try that may help:

  1. A better quality servo will have a less pronounced problem here, but they can get expensive.
  2. Check that the power supply to the servo has enough current capability; an underpowered supply may not keep the servo moving through small changes.
  3. As it always falls short, you can try to work around the problem in software.
    In this case you want to command the servo to go a tiny bit past the actual target, then correct the position back to the target when it is done.
    The amount you will need to overshoot by will probably need some experimentation to figure out.
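As a rough sketch of idea 3 (the overshoot amount and settle delay here are placeholders to tune by experiment, and set_position stands in for a call such as UB.SetServoPosition1):

```python
import time

# Sketch of the overshoot-then-settle workaround. OVERSHOOT and the
# settle delay are assumptions that will need tuning for your servo.
OVERSHOOT = 0.05  # extra travel commanded past the target

def move_with_overshoot(set_position, current, target, settle=0.3):
    """Command a little past the target, wait, then command the true target."""
    direction = 1.0 if target >= current else -1.0
    overshot = max(-1.0, min(1.0, target + direction * OVERSHOOT))
    set_position(overshot)    # deliberately go slightly too far
    time.sleep(settle)        # let the long move finish
    set_position(target)      # settle back onto the real target
```

It would be called with something like move_with_overshoot(UB.SetServoPosition1, UB.GetServoPosition1(), 0.0).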

The other possible explanation is that the potentiometer inside the servo is only so accurate.
In this case there is little you can do apart from replacing the servo with a higher quality one; unfortunately that may be expensive.

I have managed (at last) to solve the problem of the inaccurate return of the servos to the zero position by hacking them as described here.

This involves taking a wire from the servo's potentiometer wiper connection and feeding the voltage into an analogue-to-digital converter chip connected to the Pi's SPI pins. By comparing the "live" value with a value taken at the start I can correct the position of the servos.
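A sketch of the ADC read, assuming an MCP3008-style 10-bit converter (the exact chip and wiring are assumptions, not stated above; adjust the transfer bytes for the actual converter), where spi is an opened spidev.SpiDev handle from the python-spidev library:

```python
def read_adc(spi, channel):
    """Return one 10-bit sample (0-1023) from an MCP3008-style ADC.

    spi is an opened spidev.SpiDev handle; channel is 0-7. The chip
    choice is an assumption for illustration only.
    """
    # Start bit, then single-ended mode plus the channel number,
    # then a padding byte to clock out the low bits of the result
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]
```

On the Pi this would be used after spi = spidev.SpiDev() and spi.open(0, 0), with the wiper wire on the chosen channel.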

Although the hack lets me get accurate position information about where the servo is, I am having a little difficulty getting the UltraBorg to make fine adjustments. I have written some code (Python 2.7) that reads a new value from the ADC, compares it to a previous value taken when the servo was in the required position, and then attempts to adjust the servo by "nudging" it in an iterative loop using UB.SetServoPosition1() with a small positive or negative value (according to the sign of the difference between the old and new values), reading the ADC for the latest position, and so on until the difference between the new value and the required value is minimal.

Although this method usually works, occasionally the servo hunts back and forth without finding the minimal value.

I suspect that there is more luck than judgement involved here. If I understand correctly, the value used by SetServoPosition1() is an absolute value that adjusts the PWM signal to the servo to set a defined position and cannot, in theory, be used to send a relative value to the servo. In simple terms, it cannot "nudge" the servo a little bit left or right. Am I correct?

If so, is there any other way that I can do this using the UltraBorg library?


piborg's picture

You are right, the SetServoPositionX calls take an absolute position for the servo to use.

What you could do is make your own nudge function like this:

def NudgeServo1(nudgeValue):
	global UB
	current = UB.GetServoPosition1()
	UB.SetServoPosition1(current + nudgeValue)

If you want a more precise nudge of the minimum amount you can use the raw values instead:

def NudgeServo1Up():
	global UB
	current = UB.GetRawServoPosition1()
	UB.CalibrateServoPosition1(current + 1)
def NudgeServo1Down():
	global UB
	current = UB.GetRawServoPosition1()
	UB.CalibrateServoPosition1(current - 1)

This will change the PWM output by the smallest possible value, but it will ignore the on-board limit checking as well.
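For what it's worth, the ADC feedback loop described earlier in the thread could use nudge helpers like these in something like the following sketch (the tolerance deadband, step cap, and pause are assumptions added to stop the hunting behaviour, not part of the library):

```python
import time

def seek(target, read_position, nudge_up, nudge_down,
         tolerance=3, max_steps=200, pause=0.05):
    """Nudge one step at a time until the measured position is within
    tolerance of target. The deadband (tolerance) and the step cap are
    there to stop the loop hunting back and forth indefinitely."""
    for _ in range(max_steps):
        error = target - read_position()
        if abs(error) <= tolerance:
            return True           # close enough, stop nudging
        if error > 0:
            nudge_up()
        else:
            nudge_down()
        time.sleep(pause)         # give the servo time to move
    return False                  # gave up without converging
```

It would be called with something like seek(adcTarget, readAdc, NudgeServo1Up, NudgeServo1Down), where readAdc returns the latest potentiometer reading.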

Your second example using the raw values works well (despite the typo!). My code was similar to your first example but was less accurate. The camera is mounted on two servos fixed base to base, enabling the camera to scan 360 deg, so any error in position is multiplied by 2.

Many thanks.

piborg's picture

I am not sure how I missed that typo, corrected now...

Glad to hear that it worked regardless :)

I am still having difficulty in getting servos to return accurately to the start position as set in the UltraBorg Tuning Gui, despite my earlier optimism in this thread. I have abandoned the use of the "hack" to the servos as it proved to be no more accurate than before.

Now, at the beginning of my script I first of all get the start position with "StartUp3 = UB1.GetServoStartup3()" (UB1 because I have two UltraBorgs onboard Diddy).

When I want to return the servo to the start position I use "UB1.CalibrateServoPosition3(StartUp3)"

Sometimes the servo returns accurately and on other occasions it may be 2 or 3 degrees out. However, if the servo has not "zeroed" correctly and I then open the Tuning GUI, as soon as it starts the GUI returns the servo accurately to the correct start position.

I have gone through the code for the Tuning GUI to see if I can pinch the bit that does that, but although it's not quite Double Dutch, working through someone else's code can be quite trying!

I would welcome any assistance you may be able to offer.

piborg's picture

No problem, I think I can help you here :)

When the Tuning GUI is loaded it positions all of the servos to the default startup position, named CAL_PWM_START in the script. The script uses the value 3000 for this.

This is done using the CalibrateServoPositionX commands like you are already using.

In other words instead of:

StartUp3 = UB1.GetServoStartup3()

it does:

UB1.CalibrateServoPosition3(3000)
My best guess is that your startup position for the servo is nearly but not quite the 3000 default the Tuning GUI uses. It might be worth setting the startup position for the servo to 3000 and seeing if your current code then behaves correctly.

Next time Diddy comes out to play I'll give that a try.

I've now managed to get the servos to return to the startup position with good accuracy. I realised that the closer the servos were to the startup position when instructed to return to it, the less accurate they were in reaching it.
The workaround is quite simple: before the code issues the return-to-startup instruction it first issues an instruction to turn to the servo maximum, followed by the return to startup. There is a time.sleep(1) either side of the instructions to give the servos time to complete the actions.
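In sketch form, using the UltraBorg calls from earlier in the thread (the function name and the settle parameter are illustrative; 1.0 is the servo maximum and startup_raw would be the value read with GetServoStartup3):

```python
import time

def return_to_startup(ub, startup_raw, settle=1.0):
    """Make a long move first, then return to the saved startup
    position; it was the short moves that proved inaccurate."""
    ub.SetServoPosition3(1.0)                # swing out to servo maximum
    time.sleep(settle)                       # let the big move complete
    ub.CalibrateServoPosition3(startup_raw)  # back to the saved raw start
    time.sleep(settle)                       # let it settle before moving on
```

Called as return_to_startup(UB1, StartUp3) in the naming used above.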

piborg's picture

That is an interesting solution. I can only assume that the servos have a larger absolute error when they are given small position changes.

Glad to hear you have finally found a way to make things reliable :)
