Week 3, Software

Vision

Vision is a concept for target recognition: a ring of LEDs around the camera sends out light toward retroreflective targets in its vicinity, which reflect the light straight back to the camera. The resulting image can then be processed to locate the targets and act on that information.

I started on the vision code by installing the required libraries for Python, our coding language, such as OpenCV and NumPy. After much struggle finding the right order of installation, we got the libraries working, after which we could start coding.

We started by opening the PiCam stream and running it through an HSV (Hue, Saturation, Value) filter, which filters out all the colors except the one returned by the vision tape, in our case green. After the HSV filter we blurred the image to remove some noise and rough edges. Next, we used OpenCV's morphological opening and closing operations to further filter out the noise inside and outside of the target, leaving a clean black-and-white image to work on. We then calculated the contour area of each candidate target and only kept the ones above a threshold. Finally, we created a matrix to store the angle of each target together with its corresponding x-coordinate, after which we drew lines over the targets and calculated their midpoints and angles. That is how far we are now; we still want to use the Ethernet connection to send the results to the RoboRIO, and fine-tune the values that are being sent.
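As a reference, here is a minimal sketch of that pipeline, assuming OpenCV 4 and the PiCam exposed as a normal video device; the HSV bounds and area threshold are placeholder values, not our tuned ones.

```python
import cv2
import numpy as np

# Illustrative HSV bounds for the green light returned by the vision tape;
# the real values are found by tuning on the actual target.
LOWER_GREEN = np.array([50, 100, 100])
UPPER_GREEN = np.array([90, 255, 255])
MIN_AREA = 200                      # contour area threshold; placeholder value
KERNEL = np.ones((5, 5), np.uint8)  # structuring element for opening/closing

cap = cv2.VideoCapture(0)           # PiCam exposed as a V4L2 device

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Blur to remove noise and rough edges, then keep only the green pixels
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)

    # Opening removes speckle noise outside the target,
    # closing fills small holes inside it
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, KERNEL)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, KERNEL)

    # Find the remaining blobs (OpenCV 4 returns contours and hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for contour in contours:
        if cv2.contourArea(contour) < MIN_AREA:
            continue
        # The rotated bounding box gives the midpoint and angle of the tape
        (cx, cy), _, angle = cv2.minAreaRect(contour)
        targets.append((cx, angle))

    # `targets` now holds each target's x-coordinate and angle,
    # ready to be sent to the RoboRIO
```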

Everything vision-related runs on a Raspberry Pi 3 co-processor to take load off the RoboRIO. To illuminate the targets, we used three bright green LED rings, which give enough light in the camera image at a sufficient distance.
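The Ethernet communication to the RoboRIO still has to be set up; one common FRC approach is NetworkTables over the robot network, which could be an option. A minimal sketch using the pynetworktables package, where the server address pattern, table, and key names are assumptions rather than our final setup:

```python
from networktables import NetworkTables

# Connect to the RoboRIO over the robot network; the mDNS address follows
# the standard FRC pattern, with XXXX as a placeholder for the team number.
NetworkTables.initialize(server="roborio-XXXX-frc.local")
vision_table = NetworkTables.getTable("vision")

def publish_target(x_pixel, angle):
    """Send the target midpoint x-pixel and angle to the RoboRIO."""
    vision_table.putNumber("xpixel", x_pixel)
    vision_table.putNumber("angle", angle)
```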

AUTO-ALIGNING

Auto-aligning is a technique that will be used to make sure the robot is perfectly aligned with the target. Vision will send the x-pixel of the midpoint of the target to the RoboRIO. In LabVIEW, the x-pixel will be converted to the yaw angle using the following equation:

$$\text{yaw} = \arctan\!\left(\frac{x_{\text{pixel}} - \frac{\text{image width}}{2}}{f}\right)$$

where $f$ is the focal length of the camera in pixels.
To calculate the focal length, the following equation is needed:

$$f = \frac{\text{image width}}{2\tan\!\left(\frac{\text{FOV}}{2}\right)}$$
Ex:

Team Rembrandts uses the Raspberry Pi Camera Module V2. The image width is 640 pixels and the horizontal field of view is 62.2° (see the specifications of the camera). So the focal length would be the following:

$$f = \frac{640}{2\tan\!\left(\frac{62.2°}{2}\right)} \approx 530.47 \text{ pixels}$$
The camera gives that the target is at an x-pixel of 100. So then the yaw angle would be the following:

$$\text{yaw} = \arctan\!\left(\frac{100 - 320}{530.47}\right) \approx -22.5°$$

This is how the formula looks in LabVIEW: the X-pixel received from the Raspberry Pi's camera module is sent to the RoboRIO, and the yaw angle is the output, which will be used as a setpoint.
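The LabVIEW block diagram itself is an image, but the conversion is only a few lines; here it is as a Python sketch for reference (the constant names are ours):

```python
import math

IMAGE_WIDTH = 640   # pixels, Raspberry Pi Camera Module V2
HFOV = 62.2         # horizontal field of view in degrees
FOCAL_LENGTH = IMAGE_WIDTH / (2 * math.tan(math.radians(HFOV / 2)))  # ~530.47 px

def yaw_from_xpixel(x_pixel):
    """Convert a target midpoint x-pixel to a yaw angle in degrees."""
    return math.degrees(math.atan((x_pixel - IMAGE_WIDTH / 2) / FOCAL_LENGTH))

print(yaw_from_xpixel(100))   # ~ -22.5 degrees, matching the example above
```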

This is the LabVIEW code for auto-aligning by manually giving the setpoint, without reading the X-pixel; reading it in is something that will be implemented soon. Pressing button 1 resets the NavX yaw angle back to 0, and holding button 0 on the joypad runs the loop. First, the code reads the yaw angle from the NavX, which is the process variable of the PID controller; the manual setpoint can be any number from -179 to 179. The values -0.95 and 0.95 limit the maximum output to the motors, so the most they can go is 95%. 20 is the dt in seconds. The values 0.1, 3 and 0.5 are the P, I and D gains, found by tuning while watching the graph, which looks something like this after tuning:

Here the setpoint was 45 degrees: the controller keeps running the motors and slowly decreases the output the closer the robot gets to 45 degrees, until it stops almost exactly at 45 degrees, thanks to the PID controller.
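Expressed as a hypothetical Python sketch, the LabVIEW loop above does roughly the following; the `navx` and `drive` objects stand in for the real hardware interfaces, and the loop period is an illustrative assumption:

```python
import time

KP, KI, KD = 0.1, 3.0, 0.5   # gains found by tuning, as described above
MAX_OUTPUT = 0.95            # motor output is clamped to +/-95%
DT = 0.02                    # loop period in seconds; illustrative value

def auto_align(navx, drive, setpoint):
    """Rotate in place until the NavX yaw reaches `setpoint` (-179..179 degrees)."""
    navx.reset()                    # button 1 in the LabVIEW code resets the yaw to 0
    integral = 0.0
    previous_error = 0.0
    while True:                     # the real loop runs while button 0 is held
        error = setpoint - navx.get_yaw()    # the NavX yaw is the process variable
        integral += error * DT
        derivative = (error - previous_error) / DT
        output = KP * error + KI * integral + KD * derivative
        output = max(-MAX_OUTPUT, min(MAX_OUTPUT, output))
        drive.arcade(0.0, output)            # turn only, no forward motion
        previous_error = error
        time.sleep(DT)
```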

Things that still have to be done for auto-aligning are integrating it with vision and adding a way for the NavX to reset once, before the loop starts, after pressing button 0 on the joypad. After that, all this code will be organized into classes.

MOTION PROFILING

Last week we worked on motion profiling, which is a way to let the robot drive autonomously. It follows a pre-programmed path, generated with a motion profile generator. In this generator we can add points, and it calculates the position and velocity for every given time interval.

All the points are put into a .csv file; our code reads out the values and puts them into the motion profile buffer after some calculations that convert the speed to encoder values.

This is the code for reading the .csv files and putting them into the buffer.

The calculations are handled in the sub-VI.
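Our implementation is in LabVIEW, so purely as a textual illustration, the read-and-convert step could look like this in Python. The column order (position, velocity, dt) and the conversion constants are assumptions:

```python
import csv

WHEEL_CIRCUMFERENCE = 0.48   # metres per wheel revolution; example value
TICKS_PER_REV = 4096         # encoder ticks per revolution; example value

def velocity_to_ticks_per_100ms(v):
    """Convert a velocity in m/s to encoder ticks per 100 ms."""
    return v / WHEEL_CIRCUMFERENCE * TICKS_PER_REV / 10.0

def load_profile(path):
    """Read (position, velocity, dt) rows from a .csv into a point buffer."""
    buffer = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            position, velocity, dt = (float(x) for x in row[:3])
            buffer.append({
                "position_ticks": position / WHEEL_CIRCUMFERENCE * TICKS_PER_REV,
                "velocity_ticks": velocity_to_ticks_per_100ms(velocity),
                "duration_ms": int(dt),
            })
    return buffer
```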
Before running the motion profile, you must set up the motors in your drive train; after that, there is only one function in the WPI library concerned with executing the motion profile.

We have encountered some problems along the way while creating the motion profile code, such as not being able to run a motion profile backwards after another motion profile; we made some progress on this by changing the setup of the motors.