The original vision system was based on scanning a room with an ultrasonic sensor and creating a 3D space-time SQL database.
It was a good idea, but it had several problems.
So I stepped up to the next least expensive option and got a Kinect v2 for Windows 10. This introduced new problems, of course. The Gateway was not powerful enough to run the Kinect, and I didn't have the money to buy a new big laptop. So I decided to go with the best I could afford for the vision and run two PCs. The Kinect runs several data streams, together and separately. I am using infrared, 3D, 1080p 2D color, and a feed that identifies skeletal parts: 20 joints for each of 6 people. The 3D camera works from 0.5 to 8 meters in 1-millimeter increments and is programmed in C#. After I got the Kinect installed, I found the arm was not long enough: I had to get the robot claw 0.5 meters in front of the camera. Pictures below.
A couple of broken arms later, I came up with Armzilla III.
I know it looks too big, but I got tired of everything being too small. The arm has a reach of 1.2 meters. The actuators can lift 240 pounds on the bottom beam and 340 pounds on the top. But the black 6-degree-of-freedom arm hardware did not like running with the base in a vertical position: the nuts kept falling off, and I broke the claw.
So I bought a new claw.
I put a stepper motor on the end of the arm beam to twist the claw.
Each actuator needs 4 relays to flip the polarity on the actuator inputs. The code for the actuators on the two beams has been written and tested, and so has the Hall effect sensor code. There's a sketch of that logic after the pictures below.
Left: actuator for the top beam, with Hall effect sensor. Right: actuator for the bottom beam; a replacement like the one on the left, with a Hall sensor, is scheduled for this actuator.
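Here's a rough sketch of how that relay and Hall sensor logic can look on an Arduino. Everything in it — the pin numbers, the active-low relay boards, the H-bridge-style wiring, the dead-time delay, the single-character serial commands — is an illustrative assumption, not the platform's actual wiring or code.

```cpp
// One linear actuator driven through 4 relays wired like an H-bridge:
// closing A+D runs it one way, closing B+C reverses the polarity,
// all relays open stops it. A Hall effect sensor provides position pulses.

const int RELAY_A = 2;   // supply side, extend
const int RELAY_B = 3;   // supply side, retract
const int RELAY_C = 4;   // ground side, retract
const int RELAY_D = 5;   // ground side, extend
const int HALL_PIN = 6;  // Hall effect sensor output

long hallCount = 0;      // actuator position in Hall pulses

void allOff() {
  digitalWrite(RELAY_A, HIGH);  // assumed active-low boards: HIGH = relay open
  digitalWrite(RELAY_B, HIGH);
  digitalWrite(RELAY_C, HIGH);
  digitalWrite(RELAY_D, HIGH);
}

void extendArm() {
  allOff();
  delay(20);  // dead time so opposite relays never short the supply
  digitalWrite(RELAY_A, LOW);
  digitalWrite(RELAY_D, LOW);
}

void retractArm() {
  allOff();
  delay(20);
  digitalWrite(RELAY_B, LOW);
  digitalWrite(RELAY_C, LOW);
}

void setup() {
  Serial.begin(9600);
  for (int pin = RELAY_A; pin <= RELAY_D; pin++) {
    digitalWrite(pin, HIGH);  // preload HIGH so the relays stay open
    pinMode(pin, OUTPUT);
  }
  pinMode(HALL_PIN, INPUT_PULLUP);
}

void loop() {
  // Single-character serial commands: 'e' extend, 'r' retract, 's' stop
  if (Serial.available()) {
    char c = Serial.read();
    if (c == 'e') extendArm();
    else if (c == 'r') retractArm();
    else if (c == 's') allOff();
  }

  // Count Hall pulses by polling for falling edges
  // (track the last command if you need the direction of travel)
  static int lastHall = HIGH;
  int h = digitalRead(HALL_PIN);
  if (lastHall == HIGH && h == LOW) hallCount++;
  lastHall = h;
}
```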
The arm base turntable and the claw are each driven by a stepper motor.
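Driving those steppers can be as simple as the standard Arduino Stepper library. Here is a minimal sketch for the claw twist, assuming a common 200-step motor on pins 8 through 11; the real pins, step count, and speed are whatever the arm actually uses.

```cpp
#include <Stepper.h>

const int STEPS_PER_REV = 200;              // assumed 1.8-degree motor
Stepper claw(STEPS_PER_REV, 8, 9, 10, 11);  // assumed driver input pins

void setup() {
  claw.setSpeed(30);  // RPM
}

void loop() {
  claw.step(STEPS_PER_REV / 4);   // twist the claw a quarter turn
  delay(1000);
  claw.step(-STEPS_PER_REV / 4);  // and back
  delay(1000);
}
```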
I didn't write a graphical user interface for the arm. The GUI I have written so far, with its screen buttons and sliders, is a bit cumbersome to use. I wanted to try arcade-type controls, so I decided to go with thumb joysticks for now.
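On the Arduino side, reading a thumb joystick boils down to an analogRead with a deadband. A minimal sketch for one axis, where the pin, the deadband width, and the driveActuator() hook are all assumptions for illustration:

```cpp
// Read one thumb joystick axis and turn it into extend/retract/stop commands.

const int JOY_X = A0;       // joystick X-axis wiper
const int DEADBAND = 80;    // ADC counts around center treated as "stop"

int lastDir = 0;

void driveActuator(int dir) {
  // dir: +1 extend, -1 retract, 0 stop. Wire this into the relay
  // H-bridge code shown earlier; for now it just logs direction changes.
  if (dir != lastDir) Serial.println(dir);
  lastDir = dir;
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  int x = analogRead(JOY_X) - 512;  // center the 0-1023 reading on zero
  if (x > DEADBAND)        driveActuator(1);
  else if (x < -DEADBAND)  driveActuator(-1);
  else                     driveActuator(0);
  delay(20);               // ~50 Hz polling is plenty for a thumb stick
}
```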
The vision application's screen is broken up into six panels. Starting at the top left is the numerical representation of the database along with the program function buttons. Top middle is the real-time infrared and skeletal feed; the skeletal feed tracks 20 joints for each of 6 people. The top right panel is a snapshot panel for the 3D camera. The lower left panel is the 1080p real-time feed, and the lower right is the 3D camera real-time feed. The application maps the room from 0.5 to 8.5 meters in 1-millimeter slices and puts them in a database.
The two PCs that run the platform are connected to my wireless network and are operated remotely using Windows Remote Desktop Connection. The drive motors, cameras, ultrasonic sensor, the two pan-tilts, the 6-degree-of-freedom arm, and the auxiliary stepper motor all run on the Gateway and are controlled through the GUI above. The interface was written in Visual Basic.

The platform currently has 5 Arduinos, 4 of which run on the Gateway. The Arduinos are separated to facilitate testing and by electrical application type. For example, stepper motor applications and actuator applications get along well together, but don't get along with servo applications. It is also easier to make changes without affecting a lot of systems, and you can test parts of the system without firing up the whole shebang.

That brings us to the two boxes in the upper left-hand corner. When you fire up the program, it looks to see if there are Arduinos plugged in and lists them in the listbox. You can assign the program function and serial port to a USB port by selecting the radio button and clicking on the USB port in the window. When you plug in a new USB device, press the refresh button; the device will appear in the listbox and you can assign it to a serial port the same way. For the most part, if the Arduinos stay plugged in, their USB COM names will stay the same and you can assign the serial ports without worrying. But occasionally the USB COM names change, typically if they are unplugged and plugged in again. What I do in this case is shut down the machine, unplug the USB devices, fire up the machine, and then plug them back in one at a time, assigning each to its serial port again.

The tabbed listbox is the serial log from the Arduinos; click the tab for the Arduino log you want to monitor. For now the rest of the controls are pretty self-explanatory and can be expanded on later.

The vision system and Armzilla run on the Asus. Armzilla currently doesn't have a GUI; I have 5 thumb joysticks hooked directly to the Arduino. The vision system uses a Kinect 2 for Windows, and its GUI is written in C# using Windows Presentation Foundation.
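One way to take the sting out of the COM-name shuffle (the platform doesn't do this today; it's just a sketch) would be to have each Arduino identify itself over serial, so the GUI could match boards to functions no matter what Windows calls the port. The "ID?" query and the board name here are made-up conventions.

```cpp
// Self-identification handshake: the board replies with a fixed name when
// the PC sends "ID?", so ports can be matched to functions even after
// Windows renumbers them. Query string and BOARD_ID are assumptions.

const char* BOARD_ID = "ACTUATOR-ARDUINO";

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available()) {
    String cmd = Serial.readStringUntil('\n');
    cmd.trim();
    if (cmd == "ID?") {
      Serial.println(BOARD_ID);  // the GUI reads this and assigns the port
    }
    // ...other serial commands handled here...
  }
}
```

The GUI would then poll each COM port with "ID?" at startup and assign functions from the replies instead of from the port names.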