calculate part of solid circle in 2D array


Robert Voigtländer

Hi,

I wonder if someone can help me with a function I need for programming my robot.
I want to update a 2D occupancy grid based on sonar data. The sonar "view angle" is cone shaped, so I need to calculate all cells of a 30° slice of a filled circle.
Something like this: http://www.intechopen.com/source/html/37778/media/fig2..jpg

Using trigonometry is too expensive. I think something like Bresenham's circle algorithm would be a good candidate.
But I need it filled (that would be doable for me, I think) and I don't need the full circle but only a part of it.

I wonder if someone has a function like this lying around. If so I would be glad to not have to reinvent the wheel :-)

Thanks and cheers
Robert
 

Peter Otten

Robert said:
OK. Found a good one here:
http://www.daniweb.com/software-development/python/threads/321181/python-bresenham-circle-arc-algorithm

Now only filling is needed.
Any help is welcome ...

I think you shouldn't implement the algorithm yourself; rather, look for a
library that does it for you.

If you are using PIL, like in the link you gave above, consider using the
draw.arc() function directly:

http://effbot.org/imagingbook/imagedraw.htm
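
For a filled wedge rather than just the outline, ImageDraw's pieslice() looks like the closer fit. A rough, untested sketch (the grid size, radius, and angles are made-up values); it draws the wedge into a 1-bit image and reads the covered cells back out as array indices:

from PIL import Image, ImageDraw
import numpy as np

# Draw the filled 30-degree wedge into a 1-bit image, then read the
# covered cells back out as array indices. pieslice() fills the wedge;
# arc() would only draw the outline.
size, r = 400, 180                        # grid size and sonar range in cells
img = Image.new("1", (size, size), 0)
draw = ImageDraw.Draw(img)
cx, cy = size // 2, size // 2
draw.pieslice([cx - r, cy - r, cx + r, cy + r], start=60, end=90, fill=1)

mask = np.array(img, dtype=bool)          # True where the wedge covers a cell
cells = np.argwhere(mask)                 # (row, col) indices of those cells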

There is also cairographics where you draw a "path" on a "surface".

http://cairographics.org/documentation/pycairo/2/reference/context.html
http://cairographics.org/documentation/pycairo/2/reference/surfaces.html

Sorry, I can't help with the details.
 

Robert Voigtländer

Thanks a lot for the links.

I don't need it to be drawn. I need the fields within the arc for some statistical calculations for an occupancy map.
So the target is a 2D array, not a picture.

Robert
 

Roy Smith

Robert Voigtländer said:
Thanks a lot for the links.

I don't need it to be drawn. I need the fields within the arc for some
statistical calculations for an occupancy map.
So the target is a 2D array, not a picture.

Robert

If you go with one of the suggestions to use a graphics package to draw
the arc, you can then take the resulting bitmap image and iterate over
it to see which pixels are black and which are white.

But, I'm wondering about your comment that "Using trigonometry is too
expensive". Trig is cheaper than you think, because it's all done down
in the C libraries, and I wouldn't be surprised if it's pushed all
the way down to the hardware on most modern machines. Compared to your
Python application code, I suspect the amount of trig needed for this
problem isn't going to be a major factor in timing.

Are you guessing that trig is too slow, or have you actually done some
profiling to measure whether it is or not?

What's the resolution required? Let's assume a 1k x 1k image with the
sensor in the center and you need 1 degree angular resolution. There are
about 2200 pixels in each 1-degree sector. You could pre-compute those
and store 360 sets of 2200 pixels each (about 800k points total). For
any given 30-degree sector, just "or" together the 30 1-degree slices
and you've got your set of pixels for that sector.

Will it be fast enough? Beats me, but it's easy to test.
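
Something like this, roughly (untested; the grid size, the names, and the use of numpy are assumptions): store, for each 1-degree slice, the coordinates of the cells it covers, then concatenate the slices you need at run time.

import numpy as np

SIZE, CENTER, RADIUS = 1024, 512, 500

yy, xx = np.ogrid[:SIZE, :SIZE]
dy, dx = yy - CENTER, xx - CENTER
in_circle = dx * dx + dy * dy <= RADIUS * RADIUS
# Whole-degree angle of every cell, 0..359.
angle = (np.degrees(np.arctan2(dy, dx)) % 360).astype(int)

# 360 arrays of (row, col) pairs, roughly 2200 cells each.
slices = [np.argwhere(in_circle & (angle == d)) for d in range(360)]

def sector_cells(start_deg, width_deg=30):
    # Cells covered by the sector [start_deg, start_deg + width_deg).
    return np.concatenate(
        [slices[d % 360] for d in range(start_deg, start_deg + width_deg)])

cells = sector_cells(45)   # (row, col) pairs for the 45..75 degree cone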
 

Gene Heskett

Roy Smith said:
What's the resolution required? Let's assume a 1k x 1k image with the sensor in the center and you need 1 degree angular resolution. There are about 2200 pixels in each 1-degree sector. You could pre-compute those and store 360 sets of 2200 pixels each (about 800k points total). For any given 30-degree sector, just "or" together the 30 1-degree slices and you've got your set of pixels for that sector.

Will it be fast enough? Beats me, but it's easy to test.

A side comment here if I may. Your 1 degree assumption is, generally
speaking, an extremely coarse answer in terms of the accuracy needed, as we
need accuracies a lot closer to an arc-second than to a whole degree in
robotics.

To get an idea, assume a pick-and-place arm that has to retrieve the part
from whatever method delivers it to within reach of the arm, the arm's total
joint-to-joint length is 4 feet, and the part is being delivered hot, as in
700°F, while the piece it will be installed on is fresh from contact with a
block of dry ice. So it's a shrink fit, a one-time assembly, because once the
temps equalize the part will be shrunk into place so tightly that it can only
be removed by a lathe.

So the accuracy needed to place this part properly at the start of the push
to set it in place is 0.01 mm, and the push into position must be done to
0.01 mm accuracy before the parts freeze together.

I know this is a bit like asking if that persimmon branch will hold 2
opossums, but what is the accuracy needed in arc-seconds per joint
(counting the arm's pivot joint and an articulated wrist for part
orientation, that makes at least 5 joints) to pull this motion off every 15
seconds on some GM part assembly line at Delphi?

Real-world problem, guys. My own experience with machine vision says that
without cameras and video processing running at 1000x the speeds of what I
can buy on fleabay, it's way too slow (see camview-emc for an example) to
actually be used in a production-line environment. They don't have time for
the arm to stop just short of where it's going, take a close-up pix, wait
3-5 seconds for the video to be processed, and then apply the corrective
moves to 'get it right'.

This isn't even worth a reply, it's even off-topic, but when an OP posits a
problem, he should posit it in terms of his real-world targets, as should
the answers proposed. The correct answer may well help your parent company
sell Delphi some new machines at 5 mil a copy.

Cheers, Gene
 

Gregory Ewing

Gene said:
Your 1 degree assumption is, generally
speaking, an extremely coarse answer in terms of the accuracy needed, as we
need accuracies a lot closer to an arc-second than to a whole degree in
robotics.

That may be true for some applications, but somehow I doubt
that a sonar beam 30 degrees wide would be able to locate
an object with arc-second accuracy.

We should be asking the OP, though. How coarse is the grid?
What kind of angular resolution is needed?

If the grid isn't too fine, I'd suggest doing something very
simple: Iterate over all the cells in the bounding rectangle
of the arc, and test whether each one is inside it.

There are two parts to that: is it inside the circle, and
is it between the two bounding lines?

Both of these can be answered with some straightforward
arithmetic. It's inside the circle if x*x + y*y <= r*r,
and a couple of vector dot products will answer the other
one. You can probably find a way to do all that in
parallel using numpy.

That leaves finding the slopes of the bounding lines.
Using trig is probably fast enough for that -- it's only
a couple of trig calls for the whole thing, which in
Python will be totally swamped by all the other calculations
you're doing. Or if you really want to avoid trig, use
a lookup table.
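
A minimal sketch of that idea (untested; the function name and the coordinate conventions are just one choice, and it assumes the sector spans less than 180 degrees so two half-plane tests suffice):

import numpy as np

def sector_mask(shape, center, radius, start_deg, end_deg):
    # Boolean mask of the grid cells inside a filled circular sector.
    # shape  -- (rows, cols) of the occupancy grid
    # center -- (row, col) of the sensor cell
    # radius -- range in cells; start_deg/end_deg -- sector edges,
    #           measured from the column axis toward the row axis,
    #           with end_deg - start_deg < 180.
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    dy = rows - center[0]
    dx = cols - center[1]

    # Inside the circle: x*x + y*y <= r*r (no square roots needed).
    in_circle = dx * dx + dy * dy <= radius * radius

    # Direction vectors of the two bounding edges (the only trig calls).
    a0, a1 = np.radians(start_deg), np.radians(end_deg)
    # Between the edges: on or past the start edge, not yet past the end
    # edge (2D cross products instead of dot products).
    after_start = np.cos(a0) * dy - np.sin(a0) * dx >= 0
    before_end = np.cos(a1) * dy - np.sin(a1) * dx <= 0

    return in_circle & after_start & before_end

# Example: 30-degree cone, 200-cell range, sensor in the middle of a 400x400 grid.
mask = sector_mask((400, 400), (200, 200), 200, 60, 90)
cells = np.argwhere(mask)   # (row, col) pairs of the covered cells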
 

Robert Voigtländer

Great discussion started here :-)

To answer some of the questions and to give more background:

- The grid resolution is 1x1cm. The problem starts when the distance of the readings gets high. Then a 1° resolution doesn't cover all cells anymore. And cells get counted double at short distances, which needs to be compensated. So for larger distances I need to get down to about 0.1° in reality to not miss cells.
- So far I ran the logic for the robot on a PC. No problem here with the performance using trigonometry (in Java). But I am on my way porting everything to a Raspberry Pi, which will be on the robot itself.
Due to the limited calculation power I want to avoid too heavy calculations for regular jobs. The scans come in about every 40ms, so the calculation needs to be as fast as possible.
- I surely will benchmark the new approach vs. the trigonometry solution, which I will port from my current Java solution. Maybe I will be surprised, but I think the trig approach will be slower.

The lookup table is also a great idea! I can produce it for any resolution needed, and at runtime it is simply a matter of combining the lists for the slices needed. Not bad.


So far I have the circle points as well as the lines, using Bresenham's circle and line functions.
Code: http://pastebin.com/GzUzuAnb

I just need to get the filling done.
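
My current plan for the filling (a rough, untested sketch; it assumes the Bresenham boundary cells come in as (x, y) tuples and the wedge spans at most 180°, so each row is a single interval):

from collections import defaultdict

def fill_boundary(boundary_cells):
    # Group the Bresenham boundary cells by row, then fill each row between
    # its leftmost and rightmost boundary cell. Valid for a convex outline
    # such as a pie slice of at most 180 degrees.
    rows = defaultdict(list)
    for x, y in boundary_cells:
        rows[y].append(x)
    filled = []
    for y, xs in rows.items():
        for x in range(min(xs), max(xs) + 1):
            filled.append((x, y))
    return filled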


Robert
 
