Thursday, July 20, 2017

Earth's Shadow

Many people have seen Earth's shadow without realizing what they were looking at. After the sun sets, or before it rises, there's about a 20-minute window to watch Earth's shadow projected on the atmosphere. It appears as a slightly darker blue band above the horizon with a reddish band above it.


I produced a short time-lapse movie of Earth's shadow in the west as the sun rises in the east. The movie has two parts. The first part is in visible light, or how we would see it with our eyes. The second part is in near infrared (NIR). It's interesting that the slightly smoky skies we're dealing with in the Treasure Valley prevent our eyes from seeing the anticrepuscular rays, but NIR cuts right through the haze. What are anticrepuscular rays? Well, crepuscular rays are the dark shadows of clouds projected into the atmosphere as lines or rays. Anticrepuscular rays are those cloud shadows projected onto the opposite end of the sky. They point to the antisolar point, or the point in the sky directly opposite the sun.


You can see my Morning Movie at the NearSys YouTube channel.




Earth's shadow in visible light, or how your eyes would see it.



This is Earth's shadow in near infrared. Notice how much darker it appears.
 

Saturday, July 15, 2017

UAVSonde Data for NearSys Station, 15 July 2017

UAVSonde data were collected at 8:10 PM. Here are the data.

Altitude: 2,227 feet
Temperature: 103°F
Relative Humidity: NA
Pressure: 917.6 mb

Altitude: 2,811 feet
Temperature: 100°F
Relative Humidity: NA
Pressure: 914.6 mb

The GPS receiver misbehaved at high altitude. If this repeats, the GPS will be replaced.

Friday, July 14, 2017

Can Robotic Vision Guide a Robot Down a Row of an Orchard?

Based on this color-near infrared image, is this robot driving down the middle of the orchard row? Can the robot determine how much and in what direction it must adjust its driving path? Image from the NNU Robotics Vision Lab. 


Orchard work is labor intensive, and labor costs money. To keep costs down, agriculture, along with manufacturing, is trying to automate processes. In agriculture, automation means things like programming robots to drive down the rows between trees in an orchard to inspect the fruit or spray the trees. For a robot to drive through an orchard without crashing into trees, it must first recognize trees with its vision system, determine its location based on that image, and then plan a driving path. I was given an opportunity to analyze an image recorded by the camera system of a robot built by NNU and see what I could come up with. Here's what I did, using ImageJ to analyze the image above (as told by the images generated in each step). With a pinch of luck, this will help robots see the trees from the orchard (forest).


First, crop the image. I took about the center third of the image, and my method doesn't seem to care exactly how much of the image is cropped, as long as the crop includes the tree trunks.
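Here is a minimal sketch of that cropping step in Python with NumPy and scikit-image. The file name is a placeholder, and I'm assuming the crop keeps the middle horizontal band of rows; adjust the fractions to taste.

```python
import numpy as np
from skimage import io

# Load the color-NIR frame (placeholder file name).
image = io.imread("orchard_row.png")

# Keep roughly the middle third of the rows. The exact fraction isn't
# critical, as long as the tree trunks stay inside the crop.
height = image.shape[0]
cropped = image[height // 3 : 2 * height // 3, :]
```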


Next, split apart the three color channels and retain just the near infrared. Notice that the tree trunks appear very dark compared to the leaves, grass, and even the sky. On a cloudy day, the sky should appear even brighter, which makes the next process even easier.
The image is then segmented by setting a threshold using the Otsu method. However, in this case, I selected to invert the image by isolating the high end of the histogram. I suspect one could invert the image first and then let the Otsu method segment the image as it determines best. Segmenting an image means finding a good threshold value at which to split the image into either black or white pixels.
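For anyone who wants to script these two steps outside of ImageJ, here's a rough Python equivalent using scikit-image. I'm assuming the NIR band sits in the first (red) channel of the color-NIR image; swap the index if the camera maps it differently.

```python
from skimage import io
from skimage.filters import threshold_otsu

# Cropped color-NIR image (placeholder file name).
image = io.imread("orchard_row_cropped.png")
nir = image[:, :, 0]          # assumed: NIR stored in the red channel

# Otsu picks a single global threshold from the histogram.
t = threshold_otsu(nir)

# The trunks are darker than the threshold; treating them as the
# foreground mirrors the inverted (high-end) selection described above.
trunks = nir < t              # True for trunk pixels
```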
After the image is segmented, it's filtered to remove the more distant trees and grasses. The filtering that does this works by dividing the image into ten-pixel groups and making every pixel in the group as bright as the brightest pixel in the group of ten. So it's called maximum filtering.
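A close scripted equivalent is a maximum (rank) filter with a window of about ten pixels. SciPy's sliding-window version isn't exactly ImageJ's implementation, but the effect is the same: thin dark features like distant trunks and grass get erased while the wide nearby trunks survive. The file name is a placeholder.

```python
from scipy.ndimage import maximum_filter
from skimage import io

# Segmented image from the previous step: trunks dark, background bright
# (placeholder file name).
seg = io.imread("segmented.png")

# Replace each pixel with the brightest value in a ~10-pixel neighborhood.
filtered = maximum_filter(seg, size=10)
```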

Now the image is scaled. The scaling decreases the x-axis by a factor of 2 (scaling factor of 0.5) and increases the y-axis by a factor of four. So essentially, the image is being stretched vertically and shrunk horizontally. The image is shrunk in the x-axis to keep the image size from becoming too large.


After scaling, the image is cropped to keep just the middle third. The stretching and cropping are repeated a second time.
This is what the image looks like after the second round of scaling and cropping.
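Scripted, those two rounds of scaling and cropping might look like the loop below. I'm assuming the crop keeps the middle band of rows, as in the first cropping step, and I use nearest-neighbor interpolation (order=0) so the image stays black and white; both are assumptions on my part, and the file name is a placeholder.

```python
from skimage import io
from skimage.transform import rescale

# Output of the maximum-filtering step, read as grayscale in the 0-1 range.
img = io.imread("filtered.png", as_gray=True)

# Two rounds of: stretch rows by 4, shrink columns by 2, keep middle rows.
for _ in range(2):
    img = rescale(img, scale=(4, 0.5), order=0, anti_aliasing=False)
    height = img.shape[0]
    img = img[height // 3 : 2 * height // 3, :]
```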


Now the image is made binary. In other words, each pixel is just a 1 or a 0.
In this step, the image is skeletonized. That means each region is reduced to a line one pixel wide running through its middle. Notice that this process has turned the tree trunks visible in the near infrared image into a series of vertical black lines.
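scikit-image has a skeletonize() routine that does this thinning. The sketch below assumes the trunks are the dark pixels of the scaled-and-cropped image, and the file name is a placeholder.

```python
from skimage import io
from skimage.morphology import skeletonize

# Scaled and cropped segmentation, read as grayscale in the 0-1 range
# (placeholder file name). Trunks are dark, so they become the foreground.
img = io.imread("scaled_cropped.png", as_gray=True)
trunks = img < 0.5            # binary: True where a trunk, False elsewhere

# Thin every connected region down to a one-pixel-wide line.
skeleton = skeletonize(trunks)
```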
Now the robot's vision system just needs to detect the black lines across the image. I think the reference line can be taken at any height across the image.
Final Thoughts
My feeling is that the vision system should record the x-axis location of each black line as it scans across the image. Since the lines are a single pixel wide, the location of each black line becomes a single number. The location of each black pixel must be taken in reference to the center of the image (in other words, the origin of the horizontal sampling line is the center of the image). Pixel locations left of the center of the image have negative values and pixel locations right of the center have positive values. Now add the pixel values together. The sum indicates the center of the tree rows relative to the center of the image. If the sum is positive, then the robot needs to drive forward and to the right. If the sum of the pixel values is negative, the robot needs to drive forward and to the left. And of course, if the sum is zero, the robot just needs to drive forward. The absolute value of the pixel sum indicates just how far off center the robot is.
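As a sanity check on that idea, here's a small Python function that computes the signed sum along one sampling row of the skeletonized image. The function name and the convention that the image center is the origin are my own; the sign convention matches the description above.

```python
import numpy as np

def row_offset(skeleton, row):
    """Signed offset of the trunk lines from the image center along one row.

    skeleton : 2-D boolean array, True where a skeleton (trunk) pixel is.
    row      : index of the horizontal sampling line.
    A negative return value means the trunk lines sit left of center
    (steer forward and left); positive means right of center.
    """
    cols = np.where(skeleton[row])[0]      # x-locations of the black lines
    center = skeleton.shape[1] / 2.0
    return float(np.sum(cols - center))
```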




For higher accuracy and certainty, the robotic vision system might want to take measurements across the final skeletonized image in several rows.
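Extending the same idea, the sketch below averages the offset over several evenly spaced rows of the skeleton. It reuses the row_offset function from the previous sketch, and the choice of five rows is arbitrary.

```python
import numpy as np

def mean_offset(skeleton, n_rows=5):
    """Average the signed offset over several evenly spaced sampling rows."""
    rows = np.linspace(0, skeleton.shape[0] - 1, n_rows).astype(int)
    return float(np.mean([row_offset(skeleton, r) for r in rows]))
```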

Imaging Fruit from the Ground Up

A suggestion given at a program review in the Robotics Vision Lab where I'm working this summer was to image fruit from below the trees. The reasoning is that the fruits hang down where the leaves can't block the view of them. It sounds like a good idea; however, I found the thermal imager doesn't respond well to this approach. Why?




Yep, there's fruit here. The peaches are still green, but visible from beneath the tree.
The thermal infrared image from near the center of the visible image above. There are peaches to the left and bottom left of the dark hole above the center of the image.
The issue with the Seek Reveal (at least with the way I have it set up) is that it scales the colors of its image based on the range of temperatures it detects. Looking up means the imager will see the sky. The sky is very cold in thermal infrared, so it becomes the black end of the color scale. The warmest tree leaf or fruit, meanwhile, becomes the white end. Since there is such a large difference between the cold sky and the warm leaves and fruit, any difference in temperature between a fruit and a leaf is tiny in comparison. Therefore, no difference between fruit and leaves can be made out in the image.
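A quick back-of-the-envelope calculation shows the effect. The temperatures below are illustrative guesses, not measurements from the Seek Reveal, but they show how the fruit-to-leaf contrast collapses once a cold sky sets the scale.

```python
import numpy as np

def to_gray(temps_c):
    """Map temperatures to 0-255 gray levels the way an auto-scaling
    imager would: coldest pixel -> 0, warmest pixel -> 255."""
    t = np.asarray(temps_c, dtype=float)
    return (t - t.min()) / (t.max() - t.min()) * 255.0

# Assumed temperatures (deg C): sky, leaf, peach.
with_sky    = to_gray([-30.0, 33.0, 35.0])
without_sky = to_gray([28.0, 33.0, 35.0])

print(with_sky[2] - with_sky[1])        # fruit vs. leaf: about 8 gray levels
print(without_sky[2] - without_sky[1])  # fruit vs. leaf: about 73 gray levels
```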


Thermal imaging may still be useful for distinguishing between fruit and leaves with robotic vision. That's because fruit, being more massive than a leaf, should maintain a warmer temperature after a cool night. However, for thermal imaging to detect this, the thermal image needs to be taken without the sky in view. Alternatively, the thermal imager must not auto-scale its image based on the temperature extremes it detects.


I may need a different thermal imager to make robotic vision possible.    

Wednesday, July 12, 2017

ImageJ

Jim, a friend reading my blog, suggested I try out ImageJ for image processing. I had never heard of this program before and needed a few days before I could find the time to check it out. And boy, am I glad I did.


ImageJ is a Java app developed at the National Institutes of Health (the project developer was Wayne Rasband) to perform image processing. You can download the application from its location on the NIH website.


After installing it on my laptop at NNU, I just click the ij executable file to get ImageJ's simple-to-use menu to pop up.


That's right, the ImageJ window is pretty small. Just a simple menu, really. 


It took just six clicks total to split a color image into its three channels. First, I had to open the image with File, Open, and then click on the image I wanted. After opening the image, I then used three more clicks to split the color image into its three RGB channels: Image, Color, and Split Channels. The original color image disappeared and was replaced with three images, one for each color.


I like that ImageJ automatically gives each image window a name that includes its color layer. 
Next I tested the subtraction of images. Subtracting images can be important for isolating cherries in an image of a cherry tree, because green leaves are bright in both red and green but cherries are only bright in red. Subtracting images requires the use of the Paste Control application. You'll find it under the Edit option.


The Paste Control Application showing some of its paste options in a pull down menu.
One color layer is subtracted from another by first making sure that the Subtract option in Paste Control is selected. Then click on the color layer to be subtracted, followed by Edit and Copy in ImageJ. Then click the second color layer, the one you want to subtract the first layer from, and click Edit and Paste. Note that the order of the subtraction is important. Subtracting the red layer from the green layer does not produce the same result as subtracting the green layer from the red layer.
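The same subtraction is easy to script. Here's a NumPy sketch; the file name and the assumption that the image is an ordinary 8-bit RGB photo are mine.

```python
import numpy as np
from skimage import io

image = io.imread("cherry_tree.jpg")          # placeholder file name
red   = image[:, :, 0].astype(np.int16)
green = image[:, :, 1].astype(np.int16)

# Leaves are bright in both red and green, so they largely cancel;
# cherries are bright only in red, so they survive the subtraction.
# Clip to avoid negative values wrapping around in an 8-bit result.
red_minus_green = np.clip(red - green, 0, 255).astype(np.uint8)

# Order matters: green - red would suppress the cherries instead.
```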


The red layer was on the left, but it's been converted into the red layer minus the green layer with just a few clicks.
An image can be segmented with ImageJ by first finding a global threshold. Setting a threshold is an interactive process, in that you can shift two sliders to set the high and low limits, if you desire. You can also let ImageJ set the boundaries. So click Image, Adjust, Threshold. The Threshold application opens up and the clicked layer suddenly appears as a segmented image with the default threshold values.



The Threshold application pops up in its default setting. Notice the modified red layer is displayed with the current threshold value applied.
You can now adjust the sliders in the Threshold application to set what range of pixel values to threshold with. It's interactive, so as you adjust the left and right limits, the image displays what it will look like under that threshold setting.


After applying a threshold and segmenting a layer, you can detect the edges of the image by clicking Process and then Find Edges.
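ImageJ's Find Edges command is, as far as I know, a Sobel-style edge detector; a rough scripted equivalent with scikit-image would be the following (the file name is a placeholder).

```python
from skimage import io
from skimage.filters import sobel

# Segmented layer saved from the thresholding step (placeholder file name).
layer = io.imread("segmented_layer.png", as_gray=True)

# Gradient-magnitude image: bright where the layer changes abruptly.
edges = sobel(layer)
```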


These are the edges of the segmented layer displayed above.
Images, or layers, can be merged together to create a new color image. This is accomplished by clicking Image, Color, Merge Channels... Under the Merge Channels application, select which image to make which color and then click Okay.


A three color image of the edges detected in the original color image. I don't know if this is particularly useful, but it is pretty cool looking.
Counting the number of objects in an image is more important than creating pretty images of edges. So that's what I'm working on next. More about that soon (I hope).

Monday, July 10, 2017

UAVSonde Measurements of Surface and Air Temperature Multiple Times Throughout the Day

I fly my UAVSonde once per week, usually on the weekend, to gather temperature, pressure, and relative humidity data once a day. I began to wonder how these conditions changed throughout the day. I know the ground temperature increases until around 4:00 PM before it begins falling again. But does this hold true 400 feet above the ground? I am investigating conditions at an altitude of 400 feet because that's how high my drone can legally fly.


So I ran the same sensors and collected data the same way five times on Sunday, July 9th. Now it was stinking hot on Sunday. My part of Idaho broke a temperature record with highs above 100 degrees. Also, the quadcopter lifted off from my driveway, and that driveway was extra toasty. That was obvious if you looked at the driveway and the neighboring lawn with the thermal imager. Anyway, once I completed the first flight, I was committed to repeating the rest of the flights in the same manner. Below are the results I got from the flights. The time is in 24-hour time.


 


A few notes. First, GPS receivers have errors in their measurements, so I needed to take an average ground elevation and air altitude for these charts. Second, the pressure sensor may be affected by the temperature. Third, the surface temperature is taken right above the cement driveway.


The first chart shows that the ground temperature did indeed increase throughout the day and began cooling at around 3:00 PM. It then spiked in temperature later in the evening. The air temperature at 400 feet AGL lagged behind the surface temperature by 7 to 10 degrees before cooling off by 7:00 PM.


The second chart shows the air pressure at 400 feet AGL is always lower than the surface pressure, but the amount of difference changes throughout the day. Also, the surface pressure spiked at around 7:00 PM, while the air pressure at 400 feet AGL spiked earlier, at around 2:30 PM.


I really need to repeat this experiment again when it's not quite so hot. I'll also experiment with calibrating the pressure sensor with temperature in order to remove this possible effect. Finally, I'll launch the quadcopter from the lawn.


It's said you can look up lots of information on the Internet these days. It's probably true that someone already knows how the temperature of the air several hundred feet above the ground changes throughout the day. But I say, why look it up when you could find out for yourself? In the process, you learn more STEM and the importance of measurement. And you'll develop more skills and hone the ones you already have. And that's not a bad way to spend a Sunday afternoon.

Flame Wheel Quadcopter

Part of my summer research is trying to find a good drone for my Introduction to Engineering class next year. I feel this list of requirements is suitable for this task.


The drone is affordable
The drone is student-built (this way students get a better idea of how it works)
The drone is flexible in its control system
The drone can carry a payload


Dr. Bulanaon suggested I look into the DJI Flame Wheel as a class drone. So we ordered one and I began assembling it after it arrived. The Flame Wheel was the second drone I had seen, so it was nice to have a chance to work with it.




Open the box and you'll find bags of drone parts.
I needed to download the directions before I could begin assembling the Flame Wheel. I also needed to watch a video to fill in the assembly steps not covered in the online directions. But after a couple of hours of assembly, disassembly, and reassembly, I ended up with this fine product.


Assembled and looking for an RC receiver.


The Flame Wheel is a generic drone; it's designed to work with any number of RC systems and batteries. So after a little more investigation, I've developed the following shopping list.


FrSky RX8R 8/16 channel, S Bus receiver
FrSky Taranis Q X7 16 channel, 2.4 GHz ACCST transmitter
Storm 3s 5,500 mAh LiPo (with XT60 connector)


The 16-channel RC control system will allow remote pilots to control the flight of their drone and the gimbal carrying the drone's imaging system. However, there will be an extensive setup procedure to unite the RC control system with the flight controller (NAZA Lite) used by the Flame Wheel. I'll update my blog with the procedure I went through, so keep your eyes open.


The drone will be Bind-n-Fly, meaning the receiver will only respond to commands from the transmitter it's bound to. This way, many students can fly their drones simultaneously without interfering with each other. Of course, with multiple drones airborne, students will need to work in teams of pilots and visual observers for safety. Otherwise, I think my students are really going to like the Flame Wheel. I know their teacher is.